Backup all the things: PowerProtect DataManager to protect Generic Application Workloads

Summary

Modern enterprises depend on reliable, fast, and auditable data protection—especially as more business-critical data ends up in the cloud. Ensuring this data is backed up securely, with full auditability and seamless integration to on-premises data protection infrastructure, is essential. Let’s look at how Dell’s PowerProtect Data Manager and DataDomain BoostFS help you easily orchestrate and automate these cloud-to-datacenter backup workflows—using a practical, script-driven example.


Successful Generic Agent Backup

Automating Cloud Backups: The Script Workflow

This backup automation script enables IT teams to automatically protect cloud or remote data by transferring it directly to secure, high-performance Dell DataDomain storage using BoostFS. The process is simple, yet robust—ideal for both full system backups and incremental “change-only” protection.

How the Workflow Operates

  1. Source Selection via rclone:
    The script uses rclone, a flexible command-line tool, to connect to virtually any cloud or remote storage. With pre-set cloud profiles (rclone remotes), it’s easy to target the right buckets or directories with the right credentials (see the sketch after this list).

  2. Backup Target – Dell DataDomain BoostFS:
    Backups are sent to a local mount point, provided by BoostFS. This mount absorbs cloud data and writes it directly to your DataDomain system, bringing enterprise-grade deduplication, performance, and reliability to the workflow.

  3. Orchestration with PowerProtect Data Manager (PPDM) Generic Application Agent:
    PowerProtect’s agent manages, schedules, and monitors the backup, integrating cloud data protection into your overall enterprise backup policy.

  4. Automation & Auditability:
    Choose between full or incremental backups (backup of all files or only files recently changed), schedule parallel transfers for efficiency, and leverage fully timestamped, auto-rotating logs for regulatory and troubleshooting needs.
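
To make steps 1 and 2 concrete, here is a minimal, hedged sketch of the underlying transfer. The remote name company-cloud-profile, the bucket, and the mount path are placeholders; the actual script adds validation, log rotation, and error handling around this.

# Sanity checks before the first run (remote, bucket and mount path are placeholders):
rclone listremotes                                   # the pre-set cloud profile should be listed here
rclone lsd company-cloud-profile:my-cloud-bucket     # can we reach the bucket with these credentials?
mountpoint -q /mnt/ddboost || echo "BoostFS mount is not active"

# A single cloud-to-BoostFS transfer then boils down to one rclone call:
rclone copy company-cloud-profile:my-cloud-bucket /mnt/ddboost/my_backups \
  --transfers 8 --max-age 24h --log-file /var/log/rclone.log --log-level INFO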


Script Features at a Glance

  • Full & Incremental Backups: Select ‘FULL’ for comprehensive protection or ‘LOG’ for rapid, incremental changes (ideal for nightly or near-continuous backup).
  • Support for Any Cloud: Any rclone-compatible storage is supported—public, private, or hybrid clouds.
  • Efficient Parallel Transfers: Adjust the number of simultaneous copy operations for faster backups.
  • Intelligent Log Rotation: Automatic retention and clean-up of logs removes hassle and helps with compliance.
  • Customizable Age Filters: Limit which files are backed up (e.g., only files changed in the last day—perfect for large object storage).
  • Fail-Safe Operation: Strong error handling ensures you know right away if a backup didn’t complete as planned.

Dell Product Functionality in Action

PowerProtect Data Manager Generic Application Agent

  • Flexible Orchestration:
    PowerProtect’s Generic Application Agent enables you to define and control custom backup jobs—even from outside-the-box sources like cloud storage—within your existing enterprise data protection framework.

  • Policy-Driven Protection:
    Integrate ad-hoc cloud backup into centralized, policy-based enterprise backup schedules and compliance reporting.

  • Monitoring & Reporting:
    All backups, regardless of origin, are visible and managed from a single dashboard—simplifying audits and troubleshooting.

DataDomain BoostFS

  • Seamless Filesystem Integration:
    BoostFS lets you mount your DataDomain system as a standard Linux directory, making it simple for scripts and applications to send backups without having to know about underlying deduplication or replication mechanics.

  • Enterprise Data Services:
    Data is instantly protected with DataDomain’s powerful deduplication, security, and high-speed ingest—reducing storage reserve needs and backup windows.

  • High-Speed, Low-Impact:
    BoostFS optimizes backup throughput, minimizes network utilization, and ensures data is ready for fast restore if disaster strikes.


Example: Cloud-to-DataDomain Backup in Action

Here’s how a typical backup looks in practice:

export DD_TARGET_DIRECTORY="/mnt/ddboost/my_backups"
export BACKUP_LEVEL="FULL"

./s3_backup_rclone.sh \
  -b my-cloud-bucket \
  -c company-cloud-profile \
  -p "/2025/July/" \
  -s 8 \
  -i 24h \
  -f 168h

  • This backs up everything from a specific cloud bucket (limited by prefix, if desired), using 8 parallel copy streams.
  • Full and Incremental options are set with -f and -i, determining the age range of files for each backup type.
  • Logs for each run are kept and rotated automatically—the 5 most recent backup reports are always available for compliance and troubleshooting.

Why It Matters: Robust Cloud Data Protection, Simplified

With cloud data now central to so many IT operations, organizations need to bridge the gap between cloud and on-prem storage without creating silos or complexity. By combining this script-driven workflow with Dell’s PowerProtect Data Manager and DataDomain BoostFS:

  • Protect cloud and SaaS data in-place, using your own trusted infrastructure
  • Apply unified compliance, policy, and audit controls across all data sources
  • Automate, monitor, and report on everything—no manual gaps
  • Minimize cost and maximize speed with industry-leading deduplication and throughput

Technical Details

Key Features

  • Full & Incremental Backups: Choose between complete or recent-changes-only backups.
  • Cloud Agnostic: Works with any rclone-compatible cloud provider.
  • Parallel Transfers: Speed up backups with configurable streams.
  • Log Rotation: Keeps only the latest logs for easy diagnostics.
  • Fail-fast: Errors are logged and the script exits on failure.

Prerequisites

  • Linux/UNIX system with Bash
  • rclone installed and configured
  • PowerProtect DataManager Generic Application Agent
  • Environment variables:
    • DD_TARGET_DIRECTORY (BoostFS mountpoint)
    • BACKUP_LEVEL (FULL or LOG)

Usage Example

export DD_TARGET_DIRECTORY="/data/company_backups"
export BACKUP_LEVEL="FULL"

./backup-script.sh \
    -b my-databucket \
    -c my-cloud-profile \
    -p "/nightly" \
    -s 8 \
    -i 24h \
    -f 168h

How It Works

  1. Rotates logs for troubleshooting.
  2. Validates parameters and environment.
  3. Runs either a full or incremental backup using rclone (see the sketch after this list).
  4. Transfers files with parallel streams and age filters.
  5. Logs all actions and exits on error.
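
As an illustration of steps 1, 3, and 4, the core of such a script might look like the sketch below. The log path and the retention of five logs come from this post; the rotation scheme, variable names, and default age windows are assumptions.

LOG_FILE="/var/log/rclone.log"

# Step 1 - rotate: timestamp the previous log and keep only the five newest copies
# (the naming scheme is an assumption).
[[ -f "${LOG_FILE}" ]] && mv "${LOG_FILE}" "${LOG_FILE}.$(date +%Y%m%d%H%M%S)"
ls -1t "${LOG_FILE}".* 2>/dev/null | tail -n +6 | xargs -r rm -f

# Step 3 - BACKUP_LEVEL decides which age window applies (-f for FULL, -i for LOG).
case "${BACKUP_LEVEL}" in
  FULL) MAX_AGE="${FULL_AGE:-168h}" ;;
  LOG)  MAX_AGE="${INCR_AGE:-24h}"  ;;
  *)    echo "Unsupported BACKUP_LEVEL: ${BACKUP_LEVEL}" >&2; exit 1 ;;
esac

# Step 4 - parallel transfer; a non-zero rclone exit code makes the script fail fast (step 5).
rclone copy "my-cloud-profile:my-databucket/nightly" "${DD_TARGET_DIRECTORY}" \
  --max-age "${MAX_AGE}" --transfers 8 --log-file "${LOG_FILE}" --log-level INFO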

Troubleshooting

Check /var/log/rclone.log for details. Only the 5 most recent logs are kept.
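
For a quick look at the most recent run and the retained logs (the rotated-log naming is an assumption; adjust to your setup):

tail -n 100 /var/log/rclone.log           # most recent run
ls -1t /var/log/rclone.log* | head -n 5   # the five retained log files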

Onboarding the Asset into PPDM

Generic Agent Configuration

1. Verify Compatibility

  • Ensure both your PPDM instance and intended application workload are supported.
  • Review the official support matrix for OS and application compatibility.

2. Install the Generic Application Agent

  • Download the agent package from PPDM’s software repository.
  • Install the agent on the host system where the application resides, following the platform-specific installation steps found in the official user guide.

3. Configure Backup Scripts

  • Create and configure scripts for backup operations, tailored to your application’s requirements (see the sketch after this list).
  • Store these scripts in a directory accessible to both the agent and PPDM.
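
As a hedged sketch of what such a script might check before handing work over to the transfer tool, the fragment below validates the environment described in the Technical Details section above (DD_TARGET_DIRECTORY and BACKUP_LEVEL); the write-test file name is hypothetical.

#!/usr/bin/env bash
# Hypothetical pre-flight checks for a Generic Application Agent backup script.
set -euo pipefail

: "${DD_TARGET_DIRECTORY:?BoostFS mountpoint must be set, e.g. /mnt/ddboost/my_backups}"
: "${BACKUP_LEVEL:?BACKUP_LEVEL must be FULL or LOG}"

# Make sure the BoostFS mount is actually writable before starting any transfers.
touch "${DD_TARGET_DIRECTORY}/.ppdm_write_test" \
  && rm -f "${DD_TARGET_DIRECTORY}/.ppdm_write_test" \
  || { echo "ERROR: cannot write to ${DD_TARGET_DIRECTORY}" >&2; exit 1; }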

4. Discover and Register the Agent in PPDM

  • Approve the application host in PPDM. The system should automatically recognize the installed agent.

5. Create an Asset for Generic Application

  • In PPDM, define a new Asset for the Generic Agent. This includes:
    • Connection Credentials
    • The Asset Host that acts as a Proxy Data Mover
    • Selection of the Backup Script to use
    • Parameters for the Backup Script
Generic Agent Parameters

Assign the Asset to the desired Policy, or create a new Policy for the Generic Agent.

Best Practices

  • Set required environment variables.
  • Pre-configure rclone profiles.
  • Use appropriate file age formats (24h, 7d, etc.).
  • Run as a user with necessary permissions.

You can find my example script in the Dell Examples GitHub repository.

Closing Thoughts

Cloud-native data protection doesn’t have to mean re-inventing your backup strategy. With Dell’s PowerProtect Data Manager Generic Application Agent and DataDomain BoostFS, you can extend your trusted enterprise data protection to the cloud—simply, securely, and with total control.

This ready-to-use backup automation script is just one example of how Dell continues to help organizations safeguard their modern, hybrid environments. Looking for a tailored demo or deeper integration advice? Dell’s data protection specialists are ready to help you architect and optimize your solution.

License & Author

© 2025 Dell Inc.
Author: Karsten Bott (karsten.bott@dell.com)

Customizing PowerProtect Data Manager Pods to use Node Affinity by patching the K8S Inventory Source

Create Customized Pod Configs for PPDM Pods

Disclaimer: use this at your own Risk

Warning, Contains JSON and YAML !!!

why that ?

The PowerProtect Data Manager Inventory Source uses standardized configurations and ConfigMaps for the Pods deployed by PPDM, e.g. the cProxy, the PowerProtect Controller, as well as Velero.

In this post, I will use an example to deploy the cProxies to dedicated nodes using node affinity. This makes perfect sense if you want to separate backup from production nodes.

More examples could include using CNI plugins like Multus, DNS configurations, etc.

The method described here is available from PPDM 19.10 onwards and will be surfaced in the UI in future versions.

1. what we need

The examples below must be run from a bash shell. We will use jq to modify JSON documents.

2. Adding labels to worker nodes

In order to use node affinity for our Pods, we first need to label the nodes for dedicated usage.

In this example, we label a node with tier=backup.

A corresponding Pod configuration example would look like this:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: tier
            operator: In # the node label value must be one of the listed values
            values:
            - backup            
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

We will use a customized template section later for our cProxy

So first, tag the Node(s) you want to use for Backup:

kubectl label nodes ocpcluster1-ntsgq-worker-local-2xl2z tier=backup

3. Create a Configuration Patch for the cProxy

We create a manifest patch for the cProxy from a YAML document. This will be base64-encoded and presented as the value of a POD_CONFIG entry on our Inventory Source. The API Reference describes the format of the configuration.

The CPROXY_CONFIG variable below will contain the base64-encoded document:

CPROXY_CONFIG=$(base64 -w0 <<EOF
---
metadata:
  labels:
    app: cproxy
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: tier
            operator: In
            values:
            - backup      
EOF
)    
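
To sanity-check what will be sent to PPDM, you can decode the variable again:

echo "${CPROXY_CONFIG}" | base64 -d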

4. Patching the Inventory Source using the PowerProtect Data Manager API

You might want to review the PowerProtect Datamanager API Documentation

For the following commands, we will use some bash variables to specify your environment:

PPDM_SERVER=<your ppdm fqdn>
PPDM_USERNAME=<your ppdm username>
PPDM_PASSWORD=<your ppdm password>
K8S_ADDRESS=<your k8s api address the cluster is registered with; see the UI under Asset Sources --> Kubernetes --> Address>

4.1 Logging in to the API and retrieving the Bearer Token

The below code will read the Bearer Token into the TOKEN variable

TOKEN=$(curl -k --request POST \
  --url https://${PPDM_SERVER}:8443/api/v2/login \
  --header 'content-type: application/json' \
  --data '{"username":"'${PPDM_USERNAME}'","password":"'${PPDM_PASSWORD}'"}' | jq -r .access_token)

4.2 Select the Inventory Source ID based on the Asset Source Address

Select inventory ID matching your Asset Source Address :

K8S_INVENTORY_ID=$(curl -k --request GET https://${PPDM_SERVER}:8443/api/v2/inventory-sources \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
| jq --arg k8saddress "${K8S_ADDRESS}" '[.content[] | select(.address==$k8saddress)]| .[].id' -r)

4.3 Read Inventory Source into Variable

With the K8S_INVENTORY_ID from above, we read the Inventory Source JSON Document into a Variable

INVENTORY_SOURCE=$(curl -k --request GET https://${PPDM_SERVER}:8443/api/v2/inventory-sources/$K8S_INVENTORY_ID \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}")

4.4 Adding the Patched cProxy Config to the Variable

Using jq, we will modify the Inventory Source JSON Document to include our base64 cproxy Config.

For that, we will add a list entry with the following content:

"configurations": [
        {
          "type": "POD_CONFIG",
          "key": "CPROXY",
          "value": "someBase64Document"
        }
      ]

Other POD_CONFIG keys we could modify are POWERPROTECT_CONTROLLER and VELERO. The jq command below takes care of this:

INVENTORY_SOURCE=$(echo $INVENTORY_SOURCE| \
 jq --arg cproxyConfig "${CPROXY_CONFIG}" '.details.k8s.configurations += [{"type": "POD_CONFIG","key": "CPROXY", "value": $cproxyConfig}]')
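
Before pushing the document back, it is worth verifying that the configurations list now contains the new entry:

echo "${INVENTORY_SOURCE}" | jq '.details.k8s.configurations'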

4.5 Patching the Inventory Source in PPDM

We now use a PUT request to upload the patched Inventory Source document to PPDM:

curl -k -X PUT https://${PPDM_SERVER}:8443/api/v2/inventory-sources/$K8S_INVENTORY_ID \
--header "Content-Type: application/json" \
--header "Authorization: Bearer $TOKEN" \
-d "$INVENTORY_SOURCE"

We can verify the patch by checking the PowerProtect Configmap in the PPDM Namespace.

kubectl get configmap  ppdm-controller-config -n powerprotect -o=yaml

The ConfigMap must now contain cproxy-pod-custom-config in its data section.

cproxy-pod-custom-config

The next Backup job will now use the tagged node(s) for Backup !

4.6 Start a Backup and verify the node affinity for the cProxy pod

First, identify the node you labeled

kubectl get nodes -l 'tier in (backup)'

Once the backup job creates the cProxy pod, it should be scheduled on one of the identified nodes:

kubectl get pods -n powerprotect -o wide
cproxy-pod-custom-config
validate_pod_affinity

This concludes the patching of the cProxy configuration. Stay tuned for more!

Configure PowerProtect Data Manager 19.11 with disconnected OpenShift Clusters

Installation and troubleshooting Tips for PPDM 19.11 in disconnected OpenShift 4.10 Environments

Disclaimer: use this at your own Risk

Warning, Contains JSON and YAML !!!

why that ?

I am happily running some disconnected OpenShift clusters on VMware and Azure Stack Hub. This post is about how to protect a greenfield deployment of OpenShift 4.10 with PPDM 19.11, running on vSphere.

OpenShift Container Platform 4.10 installs the vSphere CSI Driver Operator and the vSphere CSI driver by default in the openshift-cluster-csi-drivers namespace.
This setup is different from a user Deployed CSI driver or CSI Drivers as a Process like in TKGi.

In this post, I will point out some changes required to make OpenShift 4.10 fresh installs work with PPDM 19.11.

1. what we need

In order to support Disconnected OpenShift Clusters, we need to prepare our environment to Host certain Images and Operators on a local Registry.

1.1 Mirroring the Operator Catalog for the OADP Provider

PPDM utilizes the Red Hat OADP (OpenShift APIs for Data Protection) Operator for backup and restore. Therefore, we need to replicate redhat-operators from the index image registry.redhat.io/redhat/redhat-operator-index:v4.10. This is well documented in Using Operator Lifecycle Manager on restricted networks, specifically if you want to filter the catalog down to a specific list of packages.

If you want to mirror the entire catalog, you can use oc adm catalog mirror.
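
A rough sketch of that call could look like this; the local registry, target repository, and pull-secret path are placeholders for your environment:

REG_CREDS=~/.docker/config.json   # pull secret with access to registry.redhat.io and your local registry
oc adm catalog mirror \
  registry.redhat.io/redhat/redhat-operator-index:v4.10 \
  harbor.pks.home.labbuildr.com/olm-mirror \
  -a "${REG_CREDS}"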

Once the replication is done, make sure to apply the ImageContentSourcePolicy and the new CatalogSource itself.

1.2 Mirroring the DELL PowerProtect Datamanager and vSphere/Velero Components

The Following Images need to be Replicated to your local Registry:

Image                                           Repository                                                                     Tag
dellemc/powerprotect-k8s-controller             https://hub.docker.com/r/dellemc/powerprotect-k8s-controller/tags             19.10.0-20
dellemc/powerprotect-cproxy                     https://hub.docker.com/r/dellemc/powerprotect-cproxy/tags                     19.10.0-20
dellemc/powerprotect-velero-dd                  https://hub.docker.com/r/dellemc/powerprotect-velero-dd/tags                  19.10.0-20
velero/velero                                   https://hub.docker.com/r/velero/velero/tags                                   v1.7.1
vsphereveleroplugin/velero-plugin-for-vsphere   https://hub.docker.com/r/vsphereveleroplugin/velero-plugin-for-vsphere/tags   v1.3.1
vsphereveleroplugin/backup-driver               https://hub.docker.com/r/vsphereveleroplugin/backup-driver/tags               v1.3.1
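
One way to replicate them is with skopeo (the local registry name below is just the example registry used later in this post; a docker or podman pull/tag/push works just as well):

LOCAL_REGISTRY="harbor.pks.home.labbuildr.com"
for IMAGE in \
  dellemc/powerprotect-k8s-controller:19.10.0-20 \
  dellemc/powerprotect-cproxy:19.10.0-20 \
  dellemc/powerprotect-velero-dd:19.10.0-20 \
  velero/velero:v1.7.1 \
  vsphereveleroplugin/velero-plugin-for-vsphere:v1.3.1 \
  vsphereveleroplugin/backup-driver:v1.3.1
do
  skopeo copy "docker://docker.io/${IMAGE}" "docker://${LOCAL_REGISTRY}/${IMAGE}"
done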

2. Setup

2.1 Configure PPDM to point to the local Registry

On the PPDM server, point the CNDM (Cloud Native Data Mover) service to the registry that is hosting your PPDM and Velero images by editing /usr/local/brs/lib/cndm/config/application.properties. Set the FQDN for k8s.docker.registry to reflect your local registry:

k8s.docker.registry=harbor.pks.home.labbuildr.com

This will instruct CNDM to point to your registry when creating ConfigMaps for PPDM. You need to restart the service for the changes to take effect.

cndm restart

2.2 Configure a Storage Class with volumeBindingMode: Immediate

The vSphere CSI Driver Operator storage class uses vSphere’s storage policy. OpenShift Container Platform automatically creates a storage policy that targets the datastore configured in the cloud configuration. However, this storage class uses volumeBindingMode: WaitForFirstConsumer. In order to restore PVCs to a new namespace, we need a StorageClass with volumeBindingMode: Immediate. Make sure the StoragePolicyName reflects your existing policy (from the default storage class):

oc apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin-csi-immediate
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: openshift-storage-policy-ocs1-nv6xw # <lookup you policy>
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
EOF      

2.3 Deploy the RBAC Templates from PPDM

RBAC templates containing the PPDM CRDs, RoleBindings, ClusterRoleBindings, Roles, ConfigMaps, and ServiceAccount definitions can be found on the PPDM server in /usr/local/brs/lib/cndm/misc/rbac.tar.gz.
Always use the latest version that comes with your PPDM version!
Once you have copied and extracted the templates, apply them to your cluster:

oc apply -f ../ocs_vsphere/ppdm-rbac/rbac/ppdm-discovery.yaml
oc apply -f ../ocs_vsphere/ppdm-rbac/rbac/ppdm-controller-rbac.yaml

2.4 Retrieve the PPDM Discovery Service Account Token

Applying the above Templates has also created the Powerprotect namespace and Discovery Token. Retrieve the token with

export PPDM_K8S_TOKEN=$(oc get secret \
"$(oc -n powerprotect get secret \
 | grep ppdm-discovery-serviceaccount-token \
 | head -n1 | awk '{print $1}')" \
-n powerprotect --template='{{.data.token}}' | base64 -d)

2.5 Add the OpenShift Cluster to PPDM

If you have not already enabled the Kubernetes Asset Source in PPDM, do so now. Go to Infrastructure > Asset Sources > +. In the New Asset Source tab, click on Enable Source for the Kubernetes Asset Source.

Enable the OCS Cluster

Select the Kubernetes tab and click Add. Provide your cluster information and click on Add Credentials. Provide a name for the discovery service account and the token extracted in the previous step.

Adding Credentials

Click on Verify to validate the certificate of the OpenShift cluster. Hint: if you use an unknown root CA, follow the user guide to add the CA to PPDM using the ppdmtool.

Adding the OCS Cluster

The powerprotect-controller and its ConfigMap should now be deployed to the powerprotect namespace.

To view the configmap, execute

oc get configmap  ppdm-controller-config -n powerprotect -o=yaml

to view the logs, execute:

oc logs "$(oc -n powerprotect get pods \
| grep powerprotect-controller \
| awk '{print $1}')" -n powerprotect -f

If the controller is waiting for the velero-ppdm IP,

Waiting for velero-ppdm

then probably your oadp-operator subscription is pointing to the wrong operator catalog source. View your subscription with

oc describe subscription redhat-oadp-operator -n velero-ppdm
  Conditions:
    Last Transition Time:  2022-04-22T08:54:35Z
    Message:               targeted catalogsource openshift-marketplace/community-operators missing
    Reason:                UnhealthyCatalogSourceFound
    Status:                True
    Type:                  CatalogSourcesUnhealthy
    Message:               constraints not satisfiable: no operators found from catalog community-operators in namespace openshift-marketplace referenced by subscription oadp-operator, subscription oadp-operator exists
    Reason:                ConstraintsNotSatisfiable
    Status:                True
    Type:                  ResolutionFailed
  Last Updated:            2022-04-22T08:54:35Z
Events:                    <none>

In this case, we need to patch the Subscription to reflect the Catalog Name we created for our custom Operator Catalog ( in this example, redhat-operator-index):

oc patch subscription redhat-oadp-operator -n velero-ppdm --type=merge -p '{"spec":{"source": "redhat-operator-index"}}'

This should deploy the RedHat OADP Operator in the velero-ppdm namespace. You can view the Operator Status from your OpenShift Console

Waiting for OADP Install

Now it is time to watch the Pods Created in the velero-ppdm Namespace:

oc get pods -n velero-ppdm
      
Waiting for Pods

The Backup driver fails to deploy, so we need to have a look at the logs:

oc logs "$(oc -n velero-ppdm get pods | grep backup-driver | awk '{print $1}')" -n velero-ppdm -f    
Backup Driver Logs

Obviously, the OpenShift cluster operator deployed the CSI drivers, but no configuration was made for the velero-vsphere-plugin to point to a default secret for the Velero backup-driver (different from a standard CSI installation). So we need to create a secret from a vSphere config file holding the cluster ID and the vSphere credentials, and apply a ConfigMap that tells the plugin to use that secret.

Use the following example to create a config file with your vCenter csi account:

cat > csi-vsphere.conf <<EOF
[Global]
cluster-id = "$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}')"
[VirtualCenter "vcsa1.home.labbuildr.com"]
insecure-flag = "true"
user = <your vsphere csi user>
password = <your csi acccount secret>
port = 443
datacenters = "<your dc>"
EOF   

Create a secret from that file using:

oc -n velero-ppdm create secret generic velero-vsphere-config-secret --from-file=csi-vsphere.conf  

Apply the ConfigMap for the Velero vSphere plugin:

oc -n velero-ppdm apply -f - <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: velero-vsphere-plugin-config
  namespace: velero-ppdm
  labels:
    app.kubernetes.io/part-of: powerprotect.dell.com
data:
  cluster_flavor: VANILLA
  vsphere_secret_name: velero-vsphere-config-secret
  vsphere_secret_namespace: velero-ppdm
EOF

And delete the existing backup-driver pod

oc delete pod "$(oc -n velero-ppdm get pods | grep backup-driver | awk '{print $1}')" -n velero-ppdm

We can see the results with:

oc logs "$(oc -n velero-ppdm get pods | grep backup-driver | awk '{print $1}')" -n velero-ppdm
Backup Driver Logs

3. Run a backup !

Follow the Admin Guide to create a Protection Policy for your Discovered OpenShift Assets and select a Namespace with a PVC. Run the Policy and examine the Status Details.

Error in Step Log
CSI error for FCD PVC

Unfortunately, PPDM tries to trigger a CSI-based backup instead of an FCD-based PVC snapshot. This is because upon initial discovery, PPDM did not correctly identify the cluster as a vSphere-based (“VANILLA_ON_VSPHERE”) cluster, as the cluster CSI operator uses a different install location for the CSI drivers. This only occurs on 4.10 fresh installs, not on updates from 4.9 to 4.10!

What we need to do now is patch the Inventory Source of our OpenShift cluster in PPDM using the REST API.

4. Patching a PPDM Inventory Source for freshly deployed 4.10 Clusters

4.1 Get and use the bearer token for subsequent requests

      
PPDM_SERVER=<your ppdm fqdn>
PPDM_USERNAME=<your ppdm username>
PPDM_PASSWORD=<your ppdm password>
TOKEN=$(curl -k --request POST \
  --url "https://${PPDM_SERVER}:8443/api/v2/login" \
  --header 'content-type: application/json' \
  --data '{"username":"'${PPDM_USERNAME}'","password":"'${PPDM_PASSWORD}'"}' | jq -r .access_token)
      

4.2 Get the inventory id´s for vCenter and the OpenShift cluster

      
K8S_INVENTORY_ID=$(curl -k --request GET "https://${PPDM_SERVER}:8443/api/v2/inventory-sources" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
| jq '[.content[] | select(.address=="api.ocs1.home.labbuildr.com")]| .[].id' -r)

VCENTER_ID=$(curl -k --request GET "https://${PPDM_SERVER}:8443/api/v2/inventory-sources" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" \
| jq '[.content[] | select(.address=="vcsa1.home.labbuildr.com")]| .[].id' -r)
      

4.3 Query the distributionType field of the Kubernetes Inventory Source using the K8S_INVENTORY_ID from above

 
curl -k --request GET "https://${PPDM_SERVER}:8443/api/v2/inventory-sources/$K8S_INVENTORY_ID" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}" | jq .details.k8s.distributionType
 

If the Distribution type shows NON_VSPHERE, we need to patch it

Error in Step Log
Wrong Distribution Type

4.4 Read the inventory into variable:

 
INVENTORY_SOURCE=$(curl -k --request GET "https://${PPDM_SERVER}:8443/api/v2/inventory-sources/$K8S_INVENTORY_ID" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer ${TOKEN}")
 

4.5 Patch the variable content using jq:

 
INVENTORY_SOURCE=$(echo $INVENTORY_SOURCE \
| jq --arg distributionType "VANILLA_ON_VSPHERE" '.details.k8s.distributionType |= $distributionType')
INVENTORY_SOURCE=$(echo $INVENTORY_SOURCE \
| jq --arg vCenterId "$VCENTER_ID" '.details.k8s.vCenterId |= $vCenterId')
 

4.6 validate the variable using jq:

 
echo $INVENTORY_SOURCE| jq .details.k8s
 
Error in Step Log
Correct Distribution Type

4.7 And Patch that PPDM !!!

curl -k -X PUT "https://${PPDM_SERVER}:8443/api/v2/inventory-sources/$K8S_INVENTORY_ID" \
--header "Content-Type: application/json" \
--header "Authorization: Bearer $TOKEN" \
-d "$INVENTORY_SOURCE"

5. Run the Protection Policy Again

In the Job List in PPDM, select the failed job and click on restart

Error in Step Log
Restart Job

The job should now run successfully and take PVC snapshots (using First Class Disks) instead of CSI snapshots.

Successful Job
Update DELL PowerProtect Datamanager Appliance using Ansible Roles and Playbooks

Update Dell PowerProtect Datamanager Appliance using Ansible and REST API

Disclaimer: use this at your own Risk

why that ?

Automation is everywhere. Automation should be your standard. Automation should be based on (REST) APIs.
As our customers broadly use Ansible to automate standard IT tasks, I created this example use case to manage the update lifecycle of Dell PowerProtect Data Manager.
My friend Preston De Guise has written a nice blog post on PowerProtect Data Manager 19.11 – What’s New, and Updating.

We will perform all of his outlined steps to update a PowerProtect Data Manager from 19.10 to 19.11 using Ansible.

The individual API calls are organized in Ansible roles and will be executed from corresponding playbooks.

what we need

In general, all of the following playbooks and roles are built with the Ansible URI module (with one exception :-)). They have been tested from Ubuntu Linux (running in WSL2), ideally “Ubuntu 20.04.4 LTS” for Python 3.8 support.

where to get the REST API calls

You can find the REST API calls required and used in my examples on the Dell Technologies Developer Portal.

I am happy to share my Ansible roles on request for now. So let us walk through a typical update scenario.

1. Uploading the Update Package

1.1 the get_ppdm_token role to authenticate with the api endpoint

All of the playbooks need to authenticate against the API endpoint and use a Bearer token for subsequent requests, so I use a role called get_ppdm_token to retrieve the token from the API:

# role to get a Bearer Token from the PPDM API
# note: the templated values below assume the ppdm_baseurl, ppdm_port, ppdm_api_ver,
# ppdm_username and ppdm_password variables used by the other roles in this post
- name: Get PPDM Token
  uri:
    method: POST
    url: "{{ ppdm_baseurl }}:{{ ppdm_port }}{{ ppdm_api_ver }}/login"
    body:
      username: "{{ ppdm_username }}"
      password: "{{ ppdm_password }}"
    body_format: json
    status_code: 200
    validate_certs: false
  register: result
  when: not ansible_check_mode
- set_fact:
    access_token: "{{ result.json.access_token }}"

1.2 the upload_update role for uploading a package to PPDM

To upload an update package, we have to use curl via an Ansible shell call (this being the one exception :-)), as an upload via the URI module fails for files > 2 GB.

# note: using curl here as uploads >=2GB still fail in ansible uri module ....
# note: templated values assume the ppdm_baseurl, ppdm_port, ppdm_api_ver,
# access_token and upload_file variables provided by the calling playbook
- name: Uploading Update File , this may take a while
  shell: >
    curl -ks {{ ppdm_baseurl }}:{{ ppdm_port }}{{ ppdm_api_ver }}/upgrade-packages
    -XPOST
    -H "content-type: multipart/form-data"
    -H "authorization: Bearer {{ access_token }}"
    --form file=@{{ upload_file }}

1.3 Playbook to upload the Update Package

The following snippet is an example of an Upload Playbook

# Example Playbook to upload the PPDM Update Package
- name: Update PPDM
  hosts: localhost
  connection: local
  vars_files: 
    - ./vars/main.yml
  tasks:
  - name: Setting Base URL
    set_fact: 
      ppdm_baseurl: "https://{{ ppdm_fqdn | regex_replace('^https://') }}"     
  - name: Get PPDM Token for https://{{ ppdm_fqdn | regex_replace('^https://') }}
    include_role: 
      name: get_ppdm_token
    vars: 
      ppdm_password: "{{ ppdm_new_password }}"
  - debug: 
      msg: "{{ access_token }}"
      verbosity: 1
    name: do we have a token ?  
  - name: find PowerProtect update software at input mapping
    find:
      paths: "{{ lookup('env','PWD') }}/files"
      patterns: 'dellemc-ppdm-upgrade-sw-*.pkg'
    register: result
  - name: Setting Upload Filename
    set_fact:
      upgrade_file: "{{ result.files[0].path }}"
    when: result.files is defined     
  - name: parsed upload file
    debug: 
      msg: "{{ upgrade_file }}"
  - name: Uploading Update
    include_role: 
      name: upload_update
    vars: 
      upload_file: "{{ upgrade_file }}"   
      

The Playbook will run like this:

Playbook Upload Update

2. Pre Checking the Update

The update pre-check will inform you about required actions that need to be taken in order to make the update succeed. It will also show warnings for issues we might want to correct prior to the update.

2.1 the get_ppdm_update_package role for getting the Update ID

PPDM has an API endpoint to get all updates in the system. These could be active or historical updates. The response contains JSON documents with specific information and state about each update.

- name: Define uri with package ID
  set_fact: 
    uri: "{{ ppdm_baseurl }}:{{ ppdm_port }}{{ ppdm_api_ver }}/upgrade-packages/{{ id }}"
  when: id  | default('', true) | trim != ''
- name: Define uri without package id
  set_fact: 
    uri: "{{ ppdm_baseurl }}:{{ ppdm_port }}{{ ppdm_api_ver }}/upgrade-packages"
  when: id  | default('', false) | trim == ''
- name: Rewrite uri with filter
  set_fact: 
    uri: "{{ uri }}?filter={{ filter | urlencode }}"  
  when: filter  | default('', true) | trim != ''
- name: Getting Update Package State
  uri:
    method: GET
    url: "{{ uri }}"
    body_format: json
    headers:
      accept: application/json
      authorization: "Bearer {{ access_token }}"    
    status_code: 200,202,403
    validate_certs: false
  register: result  
  when: not ansible_check_mode 
- set_fact:
    update_package: "{{ result.json.content }}"
  when: id  | default('', false) | trim == ''
- set_fact:
    update_package: "{{ result.json }}"
  when: id  | default('', true) | trim != ''      
- debug:
    msg: "{{ update_package }}"
    verbosity: 1
      

This task allows us to get info about a specific update by using the update ID, or to filter updates by category (e.g. ACTIVE or HISTORY).
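
For debugging outside of Ansible, the same queries can be issued directly with curl, assuming a bearer token in TOKEN as shown in the other posts; UPDATE_ID is a placeholder for the package ID returned by the list call:

# List update packages filtered by category
curl -ksG "https://${PPDM_SERVER}:8443/api/v2/upgrade-packages" \
  --data-urlencode 'filter=category eq "ACTIVE"' \
  --header "Authorization: Bearer ${TOKEN}" | jq '.content[] | {id, state}'

# Fetch a single update package by its ID
curl -ks "https://${PPDM_SERVER}:8443/api/v2/upgrade-packages/${UPDATE_ID}" \
  --header "Authorization: Bearer ${TOKEN}" | jq '{id, state}'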

2.2 example Playbook to get an ACTIVE Update Content

# Example Playbook to get an ACTIVE Package
#
# Note: Api is still called Upgrade, but Upgrade is for Hardware only !
# SO tasks will be named update but execute Upgrade
#
- name: get PPDM Update Package
  hosts: localhost
  connection: local
  vars_files: 
    - ./vars/main.yml
  tasks:
  - name: Setting Base URL
    set_fact: 
      ppdm_baseurl: "https://{{ ppdm_fqdn | regex_replace('^https://') }}"     
  - name: Get PPDM Token for https://{{ ppdm_fqdn | regex_replace('^https://') }}
    include_role: 
      name: get_ppdm_token
    vars: 
      ppdm_password: "{{ ppdm_new_password }}"
  - debug: 
      msg: "{{ access_token }}"
      verbosity: 1
    name: do we have a token ?  
  - name: get Update Package
    include_role: 
      name: get_ppdm_update_package
    vars:
      filter: 'category eq "ACTIVE"' 
  - debug: 
      msg: "{{ update_package }}"

      

This example shows the JSON response for the update package with essential information, for example versions, required passwords/reboot, updated EULAs, ID, etc.:

Example Upgrade Package

2.3 the precheck_ppdm_update_package role

After reviewing the information, we will trigger the pre-check using the precheck endpoint. This will transition the package state to PROCESSING. The precheck role below will wait for the state AVAILABLE and the validation results:

# Example Playbook to Precheck PPDM Update
#
# Note: Api is still called Upgrade, but Upgrade is for Hardware only !
# SO tasks will be named update but execute Upgrade
#
- name: precheck PPDM Update Package
  uri:
    method: POST
    url: "{{ ppdm_baseurl }}:{{ ppdm_port }}{{ ppdm_api_ver }}/upgrade-packages/{{ id }}/precheck"
    body_format: json
    headers:
      accept: application/json
      authorization: "Bearer {{ access_token }}"    
    status_code: 202,204,401,403,404,409,500
    validate_certs: false
  register: result  
  retries: 10
  delay: 30
  until: result.json.state is defined and result.json.state == "PROCESSING"
  when: not ansible_check_mode  
- set_fact:
    update_precheck: "{{ result.json }}"
- debug:
    msg: "{{ update_precheck }}"
    verbosity: 1
- name: Wait for Update Package State
  uri:
    method: GET
    url: "{{ ppdm_baseurl }}:{{ ppdm_port }}{{ ppdm_api_ver }}/upgrade-packages/{{ id }}"
    body_format: json
    headers:
      accept: application/json
      authorization: "Bearer {{ access_token }}"    
    status_code: 200,202,403
    validate_certs: false
  register: result 
  retries: 10 
  delay:  10
  when: not ansible_check_mode 
  until: result.json.validationDetails is defined and result.json.state == "AVAILABLE"
- set_fact:
    validation_details: "{{ result.json.validationDetails }}"   
- debug:
    msg: "{{ validation_details }}"
    verbosity: 1    
      

2.4 Running the Precheck Playbook

The corresponding playbook for the above task may look like this:

# Example Playbook to Precheck PPDM Update
#
# Note: Api is still called Upgrade, but Upgrade is for Hardware only !
# SO tasks will be named update but execute Upgrade
#
- name: Upgrade PPDM
  hosts: localhost
  connection: local
  vars_files: 
    - ./vars/main.yml
  tasks:
  - name: Setting Base URL
    set_fact: 
      ppdm_baseurl: "https://{{ ppdm_fqdn | regex_replace('^https://') }}"     
  - name: Get PPDM Token for https://{{ ppdm_fqdn | regex_replace('^https://') }}
    include_role: 
      name: get_ppdm_token
    vars: 
      ppdm_password: "{{ ppdm_new_password }}"
  - debug: 
      msg: "{{ access_token }}"
      verbosity: 1
    name: do we have a token ?  
  - name: get Update Package
    include_role: 
      name: get_ppdm_update_package
    vars:
      filter: 'category eq "ACTIVE"' 

  - name: delete_ppdm_upgrade
    when: update_package[0] is defined and update_package[0].upgradeError is defined
    include_role: 
      name: delete_ppdm_update_package
    vars:
      id: "{{ update_package[0].id }}"
  - name: precheck_ppdm_upgrade
    when: update_package[0] is defined and update_package[0].upgradeError is not defined
    include_role: 
      name: precheck_ppdm_update_package
    vars:
      id: "{{ update_package[0].id }}"
  - debug:
      msg: "{{ validation_details }}"
      verbosity: 0          
      

The Playbook will output the validation Details:

Validation Details

3. Executing the Upgrade

Once the Update Validation has passed, we can execute the Update

3.1 the install_ppdm_update_package role

The example task below shows the upgrade-packages endpoint to be called. A JSON body needs to be provided with additional information. We will create the body from the calling playbook.

- name: install PPDM Update Package
  uri:
    method: PUT
    url: "{{ ppdm_baseurl }}:{{ ppdm_port }}{{ ppdm_api_ver }}/upgrade-packages/{{ id }}?forceUpgrade=true"
    body_format: json
    body: "{{ body }}"
    headers:
      accept: application/json
      authorization: "Bearer {{ access_token }}"    
    status_code: 202,204,401,403,404,409,500
    validate_certs: false
  register: result  
  when: not ansible_check_mode  
- set_fact:
    update: "{{ result.json }}"
- debug:
    msg: "{{ update }}"
    verbosity: 0
      

3.2 the check_update_done role

Once the update is started, the PPDM sysmgr and other components will shut down. PPDM has a specific API port to monitor the update (also used by the UI, port 14443). The check_update_done role will wait until the update reaches 100%.

- name: Wait for Appliance Update Done, this may take a few Minutes
  uri:
    method: GET
    url: "{{ ppdm_baseurl }}:14443/upgrade/status"
    status_code: 200 
    validate_certs: false
  register: result  
  when: not ansible_check_mode 
  retries: 100
  delay: 30
  until: ( result.json is defined and result.json[0].percentageCompleted == "100" ) or ( result.json is defined and result.json[0].upgradeStatus =="FAILED" )
- debug: 
    msg: "{{ result.json }}"
    verbosity: 0
  when: result.json is defined  
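
Outside of Ansible, the same status endpoint can be polled directly with curl; the FQDN is a placeholder, and the two fields are the ones checked by the role above:

curl -ks "https://<your ppdm fqdn>:14443/upgrade/status" \
  | jq '.[0] | {upgradeStatus, percentageCompleted}'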
      

3.3 Playbook to install the Update

To install the update, we will modify the update-package JSON document and change the state field to INSTALLED. We will also approve updated EULA settings and trust the package certificate. The changes will be merged from the playbook:

state: INSTALLED
certificateTrustedByUser: true
eula: 
  productEulaAccepted: true # honor EULA Changes
  telemetryEulaAccepted: true  # honor EULA Changes
      

The corresponding Playbook looks like

# Example Playbook to Install PPDM Update
#
# Note: Api is still called Upgrade, but Upgrade is for Hardware only !
# SO tasks will be named update but execute Upgrade
#
- name: Upgrade PPDM
  hosts: localhost
  connection: local
  vars_files: 
    - ./vars/main.yml
  tasks:
  - name: Setting Base URL
    set_fact: 
      ppdm_baseurl: "https://{{ ppdm_fqdn | regex_replace('^https://') }}"     
  - name: Get PPDM Token for https://{{ ppdm_fqdn | regex_replace('^https://') }}
    include_role: 
      name: get_ppdm_token
    vars: 
      ppdm_password: "{{ ppdm_new_password }}"
  - debug: 
      msg: "{{ access_token }}"
      verbosity: 1
    name: do we have a token ?  
  - name: get Update Package
    include_role: 
      name: get_ppdm_update_package
    vars:
      filter: 'category eq "ACTIVE"' 

  - name: get_ppdm_upgrade_by ID
    when: update_package[0] is defined and update_package[0].upgradeError is not defined
    include_role: 
      name: get_ppdm_update_package
    vars:
      id: "{{ update_package[0].id }}"

  - name: merge update body
    vars:
      my_new_config:
        state: INSTALLED
        certificateTrustedByUser: true
        eula: 
          productEulaAccepted: true # honor EULA Changes
          telemetryEulaAccepted: true  # honor EULA Changes
    set_fact:
      update_package: "{{ update_package | default([]) | combine(my_new_config, recursive=True) }}"
  - debug: 
      msg: "{{ update_package }}"
  - name: install_ppdm_upgrade
    when: update_package is defined and update_package.state == "INSTALLED"
    include_role: 
      name: install_ppdm_update_package
    vars:
      id: "{{ update_package.id }}"
      body: "{{ update_package }}"
  - name: Validate Update Started
    when: update_package is defined and update_package.upgradeError is not defined
    include_role: 
      name: get_ppdm_update_package
    vars:
      id: "{{ update_package.id }}"   
  - name: Check Update State  https://{{ ppdm_fqdn | regex_replace('^https://') }}
    include_role:
      name: check_update_done 
      

We can follow the Upgrade Process from the Ansible Playbook:

Update Playbook

And also use the web browser at "https://<your ppdm fqdn>:14443" to see the update status:

Update Status UI
Create DellEMC PowerProtect Datamanager Appliance in Azure using az cli

basic install for test and demo or just in case

Disclaimer: use this at your own Risk

why that ?

Due to an Azure Marketplace API change we are currently investigating, some combined Marketplace applications in Azure no longer deploy. This also affects Dell EMC's combined deployments for Avamar with DataDomain as well as PowerProtect with DataDomain.
This example will deploy a fresh PowerProtect standalone appliance into a fresh resource group for basic testing.
This guide is meant to give you an understanding of how to accept Marketplace terms and deploy an appliance from a Marketplace image using the CLI.
I will create a more comprehensive guide for custom installs soon.

The problem looks like this:

Ooops, an API changed. So the blade does not give us a better answer than opening a case.

Marketplace Fail

so with that, we would not be able to deploy the appliance from the Marketplace.

However, we could still use terraform, ARM Templates or … az cli.

So for quick and dirty, I just guide you through an az-cli process:

identifying the template

First of all, we need to get the current image offer for DellEMC PowerProtect:

az vm image list --all --offer ppdm --publisher dellemc
Show images

This will show you all required information to deploy the image. But before we start, we need to accept the Marketplace Image Terms for that image.
For that, we just use az vm image terms accept with the urn:

az vm image terms accept --urn dellemc:ppdm_0_0_1:powerprotect-data-manager-19-7-0-7:19.7.0
image term accept

We might also want to look at what the image includes as image resources:

az vm image show --urn dellemc:ppdm_0_0_1:powerprotect-data-manager-19-7-0-7:19.7.0
image show

Deploying the Image

First of all, we need to create a resource group for PPDM, if not deploying into an existing one. In my example, I am going to deploy into a resource group ppdm_from_cli in location germanywestcentral.

If you have not already created a resource group for your deployment, do so with:

export RESOURCE_GROUP="ppdm_from_cli"
export LOCATION="germanywestcentral"
az group create --resource-group ${RESOURCE_GROUP} \
--location ${LOCATION}
az group create

Next, we will create the VM from the Marketplace image using az vm create. az vm create will create all required resources (vnet, NIC, NSG, public IP) unless we specify a specific configuration.
Just for the test, we will accept the standard parameters. In real-world scenarios, people would deploy to an existing network that might be managed by cloud admins.
You will find a more comprehensive guide on that here soon.
If you do not want a public IP at this point, add --public-ip-address ""

for now, we just do:

export VM_NAME="ppdm1"
az vm create --resource-group ${RESOURCE_GROUP} \
 --name ${VM_NAME} \
 --image dellemc:ppdm_0_0_1:powerprotect-data-manager-19-7-0-7:19.7.0 \
 --plan-name powerprotect-data-manager-19-7-0-7 \
 --plan-product ppdm_0_0_1 \
 --plan-publisher dellemc \
 --size Standard_D8s_v3

Note: the required VM size for PPDM is Standard_D8s_v3! Do not change that!

az vm create

In order to access the VM via the public IP for configuration, we need to open port 443 on the NSG. The default NSG is "${VM_NAME}NSG":

az network nsg rule create --name https \
--nsg-name "${VM_NAME}NSG" \
--resource-group ${RESOURCE_GROUP} \
--protocol Tcp --priority 300 \
--destination-port-range '443'
az nsg rule

Give the system a few moments to boot up and configure basic things.
Meanwhile, you might want to look at your deployed resources from the portal:

resources from portal

Try to connect to the appliance at https://[public_ip] after a few minutes. It should bring you to the appliance Fresh Install page.

appliance fresh install

You can now proceed to configure the appliance. For this, follow the PowerProtect Data Manager 19.7 Azure Deployment Guide from our support site.

If you want to delete your deployment, just use:

az group delete --resource-group ${RESOURCE_GROUP}