Data Domain: How NetWorker works with Data Domain and Cloud Tier
Summary: NetWorker (NW) has support built in for Data Domain Cloud Tier (DD CT). There are misconceptions and terminology clashes which this article addresses.
This article is not tied to any specific product.
Not all product versions are identified in this article.
Symptoms
NetWorker (NW) has built-in support for Data Domain Cloud Tier (DD CT). This means a NW administrator can set policies for tiering to the cloud: NW marks individual backup images so that, when DD data movement to cloud later runs, those files are sent to the configured cloud unit.
The first important fact to highlight is that NW does not move or copy (clone) data to the cloud itself. It creates copies (clones) of SSIDs to be sent to the cloud, but those clones initially sit in the DD Active tier. Only after the configured DD data-movement schedule kicks in are the SSIDs marked for (tiered to) cloud actually sent to cloud storage.
The whole process, from taking a backup to the point at which that SSID is available in the cloud, works as described below:
Cause
1. NW is configured to store backups to a DD (ingest always occurs to the DD Active tier). Typically, it uses a single Storage Unit for that:
DDBOOST Storage-Unit Show
-------------------------
Name                  Pre-Comp (GiB)   Status   User        Report Physical   Type
                                                            Size (MiB)        BoostFS
--------------------  --------------   ------   ---------   ---------------   -------
NW-STORAGE-UNIT           25680542.0   RW       boostuser   -                 no
--------------------  --------------   ------   ---------   ---------------   -------
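For completeness, the storage unit and its DD Boost user are created on the DD before the NW devices are configured against them. A minimal sketch using the names from this example (verify the exact syntax against the DD OS Command Reference for your release):

ddboost user assign boostuser                                 <<-- allow the existing user "boostuser" to use DD Boost
ddboost storage-unit create NW-STORAGE-UNIT user boostuser
ddboost storage-unit show                                     <<-- produces a listing like the one above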
2. Within NetWorker, each backup policy is stored under what NetWorker calls a "device". A device is just a subdirectory below the root of the storage unit, for example:
/data/col1/NW-STORAGE-UNIT/DAILY-DB-DEV01/00/80/ee140226-00000006-2b625ef6-66625ef6-0e095000-e36c9c56
/data/col1/NW-STORAGE-UNIT/DAILY-DB-DEV02/00/48/f8959462-00000006-3f775a89-66775a89-b8fa5000-e36c9c56
/data/col1/NW-STORAGE-UNIT/MONTHLY-DB-DEV01/03/93/30e0c543-00000006-3d5cac26-665cac26-f6f75000-e36c9c56
/data/col1/NW-STORAGE-UNIT/MONTHLY-FS-DEV06/92/30/05729157-00000006-cc5a6431-665a6431-9e685000-e36c9c56

Here the policies or devices would be "DAILY-DB-DEV01", "DAILY-DB-DEV02", "MONTHLY-DB-DEV01" and "MONTHLY-FS-DEV06".
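On the NetWorker side, the device-to-subdirectory mapping can be checked in the NSR device resource: its "device access information" attribute holds the "<dd-host>:/<storage-unit>/<device-folder>" path. A hedged nsradmin sketch (the server name is an example only):

nsradmin -s nwserver.example.com
nsradmin> show name; media type; device access information
nsradmin> print type: NSR device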
3. NW is eventually configured for DD CT, which results in an app-managed data-movement configuration in the DD such as the one below:
Cloud Data-Movement Configuration
---------------------------------
Mtree                             Target(Tier/Unit Name)   Policy        Value
-------------------------------   ----------------------   -----------   -------
/data/col1/NW-STORAGE-UNIT        Cloud/CLOUD-UNIT         app-managed   enabled
-------------------------------   ----------------------   -----------   -------

The DD data-movement configuration to cloud takes a source MTree (the NW storage unit), a target cloud unit (CLOUD-UNIT), and a policy, which for NetWorker (and Avamar) must be "app-managed" rather than "age-threshold", because the files to move to cloud are determined (marked) by NW, not selected by the age of the files themselves.
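The current data-movement configuration, schedule, and state can be checked on the DD at any time (a sketch; the output layout differs between DD OS releases):

data-movement policy show        <<-- prints the Cloud Data-Movement Configuration shown above
data-movement schedule show
data-movement status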
4. With DD CT, NW does not "mark" files to be moved to cloud in their original location: when a customer configures NW for cloud tiering, they must create another device within the same storage unit, into which NW first clones the files that are to be sent to cloud. For example, a given SSID configured to be stored in DD CT appears as two separate (but identical) files in a DD file location report:
# filesys report generate file-location tiering

File Name                                                                                                       Location(Unit Name)   Size       Placement Time
-------------------------------------------------------------------------------------------------------------   -------------------   --------   -------------------------
/data/col1/NW-STORAGE-UNIT/MONTHLY-FS-DEV05/85/72/365bdbce-00000006-3157c1bc-6657c1bc-32035000-e36c9c56          Active                1.15 TiB   Thu May 30 04:00:59 2024
/data/col1/NW-STORAGE-UNIT/CLOUD-LONG-TERM-DEV04/85/72/365bdbce-00000006-3157c1bc-6657c1bc-32035000-e36c9c56     CLOUD-UNIT            1.15 TiB   Sat Jun 1 11:13:33 2024

The information above shows that:
- The backup image with long SSID "365bdbce-00000006-3157c1bc-6657c1bc-32035000-e36c9c56" was written to the DD Active tier under the device name "MONTHLY-FS-DEV05", and was last written to on "Thu May 30 04:00:59 2024"
- There exists a tiering policy (with a target device) named "CLOUD-LONG-TERM-DEV04/"
- When the tiering policy ran (this likely happens as soon as the backup completes), a copy (clone) of the SSID was made from the original device into the NW cloud device named "CLOUD-LONG-TERM-DEV04"
- DD data-movement was eventually run and the clone of the original backup was moved from Active to the cloud unit, process completing for the file by "Sat Jun 1 11:13:33 2024"
- At the time the file location information above was collected there exists a copy of the same long SSID in both the Active and the Cloud DD tiers
- It is up to NW to expire and delete the individual copies when due. In theory, the copy in the Active tier is expired earlier than the one in cloud, which is retained for longer (otherwise it would be pointless to send that backup image to cloud in the first place)
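In normal operation the clone into the NW cloud device is created by the cloning/tiering action of the NW data-protection policy, but the same result can be reproduced manually for a single saveset with standard NW CLI tools. A hedged sketch (the client name, the pool name "DD Cloud Tier", and the SSID are examples only):

mminfo -avot -q "client=dbhost01" -r "ssid,name,volume"     <<-- find the SSID and the volume (device) it sits on
nsrclone -b "DD Cloud Tier" -S 4284769722                   <<-- clone that SSID to the pool backed by the NW cloud device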
5. The NetWorker documentation requires that the Cloud Tier device in the NW wizard be created under the same storage unit as the files in the Active tier. Creating the cloud device on a different storage unit may cause the clones to be done with a mechanism other than "fastcopy", which results in much slower cloning times. This is the applicable section in the NW documentation:
https://www.dell.com/support/manuals/en-us/networker/nw_p_ddboostdedup_integration/configuring-networker-devices-for-dd-cloud-tier?guid=guid-680d906f-10c7-4266-822e-1a0f3ba3e201&lang=en-us
Configuring NetWorker devices for DD Cloud Tier
-----------------------------------------------
Use the Device Configuration Wizard to configure NetWorker devices for the DD Cloud Tier devices.
The Data Domain devices that contain the source backup data must reside on the same mtree as the DD Cloud Tier device that will store the clone data.
The storage node that manages the Data Domain devices must be a NetWorker 19.7 storage node.
6. One implication of the documentation statement above is that, in a NetWorker / Data Domain setup with cloud tier, all data is usually stored in the same storage unit within the DD, and there is no supported way to send NW backup images to two separate cloud units. The DD data-movement configuration cannot have more than one policy for the same source MTree, which may pose a problem in situations such as the first cloud unit running out of capacity (see the example below):
Active Tier:
Resource           Size GiB   Used GiB    Avail GiB   Use%   Cleanable GiB*
----------------   --------   ---------   ---------   ----   --------------
/data: pre-comp    -          8477328.0   -           -      -
/data: post-comp   944180.2   769927.8    174252.4    82%    90605.3
----------------   --------   ---------   ---------   ----   --------------

Cloud Tier unit-wise space usage
--------------------------------
CLOUD-UNIT
Resource           Size GiB     Used GiB     Avail GiB   Use%   Cleanable GiB
----------------   ----------   ----------   ---------   ----   -------------
/data: pre-comp    -            16935910.0   -           -      -
/data: post-comp   1572768.4*   1572755.0    13.4        100%   0.0
----------------   ----------   ----------   ---------   ----   -------------

Cloud Data-Movement Configuration
---------------------------------
Mtree                        Target(Tier/Unit Name)   Policy        Value
------------------------     ----------------------   -----------   -------
/data/col1/NW-STORAGE-UNIT   Cloud/CLOUD-UNIT         app-managed   enabled
------------------------     ----------------------   -----------   -------
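The tier and cloud-unit usage shown above can be checked on the DD at any time with the commands below (a sketch; the exact columns vary by DD OS release):

filesys show space
cloud unit list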
Resolution
7. When trying to overcome the limitation above, a customer may create a new NetWorker storage unit for upcoming backups, or may clone from the existing storage unit to a device in the new one, later adding a second DD data-movement policy from the new storage unit to the new cloud unit. This ends up with a configuration like the one below:
Cloud Unit List
---------------
Name           Profile              Status   Reason
------------   ------------------   ------   -------------------------------
CLOUD-UNIT     CLOUD-UNIT_profile   Active   Cloud unit connected and ready.   <<-- existing Cloud Unit
CLOUD-UNIT02   CLOUD-UNIT_profile   Active   Cloud unit connected and ready.   <<-- new Cloud Unit
------------   ------------------   ------   -------------------------------

Cloud Data-Movement Configuration
---------------------------------
Mtree                             Target(Tier/Unit Name)   Policy        Value
-------------------------------   ----------------------   -----------   -------
/data/col1/NW-STORAGE-UNIT        Cloud/CLOUD-UNIT         app-managed   enabled
/data/col1/NW-STORAGE-UNIT-NEW    Cloud/CLOUD-UNIT02       app-managed   enabled
-------------------------------   ----------------------   -----------   -------
Besides not meeting the requirement from the NW documentation, the problem is that cloning will be slow. You may also see something like this in the DD SSP (system show performance) output:
-----------Throughput (MB/s)----------- ---------------Protocol----------------- Compression ------Cache Miss-------- -----------Streams----------- -MTree Active- ----State--- -----Utilization----- --Latency-- Repl
Date Time Read Write Repl Network Repl Pre-comp ops/s load data(MB/s) wait(ms/MB) gcomp lcomp thra unus ovhd data meta rd/ wr/ r+/ w+/ in/ out rd/wr 'CDPVMSFIRL' CPU disk in ms stream
---------- -------- ----- ----- ----in/out--- ----in/out--- ----- --%-- --in/out--- --in/out--- ----- ----- ---- ---- ---- ---- ---- ----------------------------- -------------- ----------- -avg/max---- --max--- --avg/sdev- -----------
2024/06/13 18:27:00 0.0 0.0 0.00/ 0.00 0.00/ 0.00 2 NaN% 0.00/ 0.00 20.97/ 37.46 1.7 1.7 0% 0% 21% 0% 2% 0/ 0/ 0/ 0/ 0/ 0 0/ 0 ---V---I--- 6%/ 8%[13] 7%[28] 2.1/ 7.5 116.0/ 2.2
2024/06/13 18:37:00 0.0 0.0 0.00/ 0.00 0.00/ 0.00 89 NaN% 0.01/ 0.01 19.45/ 68.12 2.8 2.1 0% 0% 21% 0% 1% 0/ 0/ 0/ 0/ 0/ 0 0/ 0 ---V---I--- 5%/ 6%[4] 7%[28] 0.7/ 3.7 115.8/ 3.0
2024/06/13 18:47:00 39.6 39.5 0.00/ 0.00 0.00/ 0.00 1160 NaN% 0.54/ 37.82 4.27/ 0.42 62.5 1.7 0% 0% 11% 0% 1% 1/ 1/ 0/ 0/ 0/ 0 1/ 1 ---V---I--- 5%/ 7%[4] 7%[28] 0.4/ 3.0 118.8/ 3.4
2024/06/13 18:57:00 215.5 215.5 0.00/ 0.00 0.00/ 0.00 825 NaN% 0.93/205.66 4.29/ 0.30 291.2 1.2 0% 0% 7% 0% 1% 1/ 1/ 0/ 0/ 0/ 0 1/ 1 ---V---I--- 7%/ 9%[14] 8%[28] 0.1/ 3.8 118.8/ 3.7
2024/06/13 19:07:00 223.9 223.9 0.00/ 0.00 0.00/ 0.00 856 NaN% 0.94/213.74 4.32/ 0.29 327.5 1.1 0% 0% 7% 0% 1% 1/ 1/ 0/ 0/ 0/ 0 1/ 1 ---V---I--- 7%/ 9%[14] 8%[28] 0.1/ 0.8 118.5/ 4.4
2024/06/13 19:17:00 218.5 218.5 0.00/ 0.00 0.00/ 0.00 1916 NaN% 1.01/208.56 5.34/ 0.32 278.3 1.3 0% 0% 9% 0% 1% 1/ 1/ 0/ 0/ 0/ 0 1/ 1 ---V---I--- 7%/ 9%[4] 8%[28] 0.2/ 3.7 118.2/ 3.6
2024/06/13 19:27:00 174.3 174.3 0.00/ 0.00 0.00/ 0.00 696 NaN% 2.25/166.37 2.02/ 0.30 64.7 1.5 0% 1% 19% 0% 1% 1/ 1/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 8%/ 12%[13] 9%[28] 0.4/ 6.5 121.5/ 4.6
2024/06/13 19:37:00 182.6 183.5 0.00/ 0.00 0.00/ 0.00 719 NaN% 5.40/174.31 1.24/ 0.29 34.8 1.1 2% 6% 28% 0% 3% 1/ 3/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 8%/ 11%[43] 12%[28] 0.3/ 6.0 121.8/ 6.9
...
2024/06/20 15:39:00 150.4 293.6 0.00/ 0.00 0.00/ 0.00 6716 NaN% 25.47/146.12 1.39/ 0.59 11.8 1.0 1% 0% 19% 0% 4% 0/ 2/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 7%/ 13%[15] 5%[14] 0.2/ 1.0 119.2/ 4.0
2024/06/20 15:49:00 215.9 298.8 0.00/ 0.00 0.00/ 0.00 12448 NaN% 31.55/212.33 1.60/ 0.65 9.8 1.0 2% 0% 0% 0% 2% 0/ 2/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 8%/ 15%[15] 4%[21] 0.2/ 0.5 117.5/ 2.7
2024/06/20 15:59:00 186.5 344.3 0.00/ 0.00 0.00/ 0.00 1854 NaN% 24.07/178.14 1.04/ 0.33 14.6 1.0 5% 0% 21% 0% 2% 1/ 2/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 6%/ 14%[15] 5%[ 3] 0.4/ 4.4 119.2/ 2.3
2024/06/20 16:09:00 205.3 426.2 0.00/ 0.00 0.00/ 0.00 808 NaN% 18.73/196.08 1.09/ 0.30 24.4 1.0 4% 5% 27% 0% 3% 0/ 2/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 6%/ 14%[47] 5%[ 3] 0.3/ 0.3 117.5/ 2.6
2024/06/20 16:19:00 211.7 399.3 0.00/ 0.00 0.00/ 0.00 843 NaN% 17.86/202.10 1.09/ 0.29 23.5 1.0 2% 0% 0% 0% 2% 1/ 2/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 6%/ 13%[15] 3%[ 3] 0.3/ 4.8 119.2/ 1.8
...
2024/06/23 18:21:00 470.3 484.6 0.00/ 0.00 0.00/ 0.00 1807 NaN% 9.85/448.86 2.59/ 0.30 50.2 1.1 1% 21% 34% 0% 0% 4/ 5/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 7%/ 11%[9] 9%[13] 0.2/ 1.5 126.8/ 8.8
2024/06/23 18:31:00 477.2 494.6 0.00/ 0.00 0.00/ 0.00 1846 NaN% 8.99/455.44 2.69/ 0.29 57.3 1.1 1% 13% 27% 0% 0% 4/ 5/ 0/ 0/ 0/ 0 0/ 2 -------I--- 5%/ 9%[0] 7%[17] 0.2/ 1.7 127.0/ 6.8
2024/06/23 18:41:00 458.9 481.3 0.00/ 0.00 0.00/ 0.00 2913 NaN% 10.17/438.07 2.57/ 0.30 48.3 1.1 1% 12% 28% 0% 0% 4/ 5/ 0/ 0/ 0/ 0 0/ 2 -------I--- 5%/ 8%[0] 7%[ 3] 0.2/ 1.2 127.0/ 5.8
2024/06/23 18:51:00 468.7 481.2 0.00/ 0.00 0.00/ 0.00 1807 NaN% 10.60/447.40 2.27/ 0.29 46.5 1.1 1% 19% 34% 0% 1% 4/ 5/ 0/ 0/ 0/ 0 0/ 2 -------I--- 7%/ 9%[9] 9%[ 3] 0.2/ 1.5 127.0/ 5.6
2024/06/23 19:01:00 474.0 485.5 0.00/ 0.00 0.00/ 0.00 1814 NaN% 14.32/452.44 1.99/ 0.29 33.6 1.1 2% 26% 39% 0% 0% 4/ 5/ 0/ 0/ 0/ 0 0/ 2 ---V---I--- 6%/ 10%[15] 11%[ 3] 0.2/ 1.5 126.2/ 6.4
When NW uses "fastcopy" for the clones (which is what happens for clones within the same storage unit), we should not see a read load. Here we see one because a clone across storage units is done by NW through "filecopy", and in this example it was even worse than that:
06/19 08:24:14.178358 [7fbbf6b4a390] ddboost-<nw-node1.example.com-52424>: JOB START IMAGE_READ ip=10.20.30.40 pid=1727382 cd=1 enc=off //NW-STORAGE-UNIT/MONTHLY-FS-DEV06/03/54/d2b98e7a-00000006-4f5a1067-665a1067-88e55000-e36c9c56
06/19 08:24:14.286608 [7facd6691b70] ddboost-<nw-node2.example.com-58788>: JOB START IMAGE_WRITE ip=10.20.30.40 pid=18392 cd=1 enc=off //NW-STORAGE-UNIT-CT/CLOUD-LONG-TERM02-DEV07/03/54/d2b98e7a-00000006-4f5a1067-665a1067-88e55000-e36c9c56

Here a clone for SSID "d2b98e7a-00000006-4f5a1067-665a1067-88e55000-e36c9c56" is happening:
- From the device "MONTHLY-FS-DEV06" in the original storage unit (NW-STORAGE-UNIT), with the READ job handled by the NW node "nw-node1.example.com"
- To the device "CLOUD-LONG-TERM02-DEV07" under the new storage unit (NW-STORAGE-UNIT-CT), with the WRITE job handled by the NW node "nw-node2.example.com"
In other words, instead of an internal DD fastcopy, the data is read out of the DD by one storage node and written back through another, which explains both the read load and the slow clone times.
8. In a case like this example (NW configured with a single storage unit, and DD CT with the cloud unit already full), the correct configuration for NW is to create a new Cloud Unit in the DD.
This avoids creating a second storage unit in the DD. A new cloud tiering policy is then created in NW, pointing to a different device within the same existing storage unit.
Then the DD data-movement configuration is changed so that upcoming data-movement runs have the new cloud unit as the target.
The final DD-side configuration looks like this:
Cloud Unit List
---------------
Name           Profile              Status   Reason
------------   ------------------   ------   -------------------------------
CLOUD-UNIT     CLOUD-UNIT_profile   Active   Cloud unit connected and ready.   <<-- existing Cloud Unit
CLOUD-UNIT02   CLOUD-UNIT_profile   Active   Cloud unit connected and ready.   <<-- new Cloud Unit
------------   ------------------   ------   -------------------------------

Cloud Data-Movement Configuration
---------------------------------
Mtree                             Target(Tier/Unit Name)   Policy        Value
-------------------------------   ----------------------   -----------   -------
/data/col1/NW-STORAGE-UNIT        Cloud/CLOUD-UNIT02       app-managed   enabled   <<-- target cloud unit changed from "CLOUD-UNIT"
-------------------------------   ----------------------   -----------   -------

When both the existing and the new NW tiering policies run, they create clones of the savesets to be sent to cloud. The clones are made within the same storage unit, under a different device (subdirectory), and the files within the NW "cloud devices" are marked for data movement.
When DD data movement runs as scheduled, all files in the single NW storage unit are listed and checked for eligibility: marked for data movement and not yet in any cloud unit. Regardless of the subdirectory (device) they sit in within the DD, all files marked for data movement that are not yet in a cloud unit are individually sent to the target cloud unit (CLOUD-UNIT02) in turn.
After a file is successfully copied to cloud and verified by the DD, the file gets "installed": the DD changes the file's CH (Content Handle) to indicate the physical location of the file, allowing it to locate the file's data in either the Active tier or either of the two cloud units.
When the backup application later tries to read or recall files in the cloud, the physical location of the file's data is transparent to NW, as the DD knows exactly where to read the data from. This is decoupled from the current DD data-movement configuration.
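The DD-side portion of this change can be sketched roughly as below. The unit, profile, and MTree names are the ones from this example, and the exact argument order of "data-movement policy set" varies between DD OS releases, so verify it against the DD OS Command Reference before use:

cloud unit add CLOUD-UNIT02 profile CLOUD-UNIT_profile       <<-- create the new cloud unit
data-movement policy set to-tier cloud cloud-unit CLOUD-UNIT02 app-managed enabled mtrees /data/col1/NW-STORAGE-UNIT
data-movement start mtrees /data/col1/NW-STORAGE-UNIT        <<-- optional: run data movement on demand
data-movement status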
9. Finally, the customer in this example did not follow the NW documentation at the beginning (and experienced severe NW clone performance issues), and ended up with some SSIDs stored three times (once in Active, and once in each of the two cloud units), which is perfectly fine (although it may be a waste of space depending on the retention policies configured):
File Name                                                                                                            Location(Unit Name)   Size       Placement Time
------------------------------------------------------------------------------------------------------------------   -------------------   --------   -------------------------
/data/col1/NW-STORAGE-UNIT/MONTHLY-FS-DEV05/85/72/365bdbce-00000006-3157c1bc-6657c1bc-32035000-e36c9c56               Active                1.15 TiB   Thu May 30 04:00:59 2024
/data/col1/NW-STORAGE-UNIT/CLOUD-LONG-TERM-DEV04/85/72/365bdbce-00000006-3157c1bc-6657c1bc-32035000-e36c9c56          CLOUD-UNIT            1.15 TiB   Sat Jun 1 11:13:33 2024
/data/col1/NW-STORAGE-UNIT-CT/CLOUD-LONG-TERM02-DEV07/85/72/365bdbce-00000006-3157c1bc-6657c1bc-32035000-e36c9c56     CLOUD-UNIT02          1.15 TiB   Tue Jun 18 10:49:10 2024
There are three copies of the same file, two of which have been moved to cloud, one to each of the cloud units.
When NW tries to read from any of them, the DD knows exactly where each one is and transparently does whatever is needed to deliver the data back to NW, with no difference compared to a situation with just an Active tier.
Each one of the three files will eventually be expired (and deleted) by NW.
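The same three copies can also be tracked from the NW side, since each DD file corresponds to one clone instance of the saveset in the NW media database, each with its own retention. A hedged sketch (the SSID and clone ID are examples only):

mminfo -avot -q "ssid=4284769722" -r "ssid,cloneid,volume,clretent,ssflags"
nsrmm -d -S 4284769722/1718722233      <<-- remove one specific clone instance manually, if ever required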
Affected Products
Data Domain

Article Properties
Article Number: 000226881
Article Type: Solution
Last Modified: 19 Aug 2024
Version: 2