Data Domain Virtual Edition: Explanation of data disk space utilisation on Data Domain Virtual Edition (DDVE)
Summary: Explanation of data disk space utilisation on Data Domain Virtual Edition (DDVE)
Symptoms
Data Domain Virtual Edition (DDVE) is a product that allows deployment of a Data Domain Restorer (DDR) in a virtual environment. Once deployment has been completed, data disks must be provisioned for use by the DDFS file system within DDVE. This article explains how physical space on those data disks is used and why the usable space within the DDFS file system might be significantly lower than the combined size of all data disks.
Resolution
When adding data disks to an instance of DDVE, certain capacity rules must be adhered to:
- The first data disk which is added must be a minimum of 200 GiB in size
- All subsequent data disks must be a minimum of 100 GiB in size
The first disk must be a minimum of 200 GiB in size because there are substantial overheads on this disk, as described below.
Let's assume a 200 GiB data disk is presented to DDVE, added to the active tier, and used to create an instance of the DDFS file system. The physical disk will be used as follows:
Initially the disk is partitioned, with slice 5 used for data storage and slice 6 used for ext3 file systems:
Model: Unknown (unknown)
Disk /dev/dm-4: 200GiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start    End      Size     File system  Name     Flags
 1      0.00GiB  0.00GiB  0.00GiB               primary
 2      0.00GiB  0.00GiB  0.00GiB               primary
 3      0.00GiB  0.01GiB  0.01GiB               primary
 4      0.01GiB  0.01GiB  0.00GiB               primary
 5      0.01GiB  193GiB   193GiB                primary  <=== Used for data storage
 6      193GiB   200GiB   6.77GiB               primary  <=== Used for ext3
As a result, ~193 GiB of disk space (slice 5) will be given to the RAID driver for use.
Note, however, that DDVE uses a concept of RAID on LUN (ROL) to protect against certain types of data corruption (for example, data corruption which cannot be detected or repaired by the underlying storage array). ROL reserves approximately 5.6% of the space in slice 5 for parity information. As a result RAID will only make ~182.3 GiB available for use by DDFS (as shown below; note that each sector is 512 bytes in size):
Array [ppart2] (active): [raid-type 106] [(0x1, 0x30) options] [NVR:N/N] [4608KB stripe] [382362624 sectors] [382362624 total sectors]
[dm-4p5]
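As a rough cross-check of these figures, the sector count reported by RAID can be converted back into GiB and compared against the ~193 GiB in slice 5. The following minimal Python sketch only re-does that arithmetic; the 512-byte sector size and sector count are taken from the output above:

# Cross-check: convert the RAID sector count into GiB and estimate the ROL overhead.
SECTOR_SIZE = 512                    # bytes per sector, as noted above

raid_sectors = 382362624             # "total sectors" from the array output
raid_gib = raid_sectors * SECTOR_SIZE / 2**30

slice5_gib = 193                     # size of slice 5 from the partition table
overhead_pct = (1 - raid_gib / slice5_gib) * 100

print(f"Space presented to DDFS: {raid_gib:.1f} GiB")    # ~182.3 GiB
print(f"ROL overhead vs slice 5: {overhead_pct:.1f}%")   # ~5.5%, in line with the ~5.6% quoted above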
The ~182.3 GiB of space given to DDFS is carved up into blocks of 1075838976 bytes in size; as a result we can create 181 such blocks. The blocks are then allocated to various upper-level file systems within DDFS as required. Note that when creating a new instance of DDFS a significant amount of space needs to be allocated for metadata such as the index, summary vector, CP meta, and reserved blocks file systems:
FIXED                NUM     BLOCK
SIZE   SIZE          BLOCKS  SIZE        NAME
Yes    194726854656  181     1075838976  /../vpart:/vol2/col1
Yes    194726854656  181     1075838976  /../vpart:/vol2/col1/cp1
No     37654364160   21      1075838976  /../vpart:/vol2/col1/cp1/cset
No     65626177536   61      1075838976  /../vpart:/vol2/col1/cp1/full_indices
No     22592618496   21      1075838976  /../vpart:/vol2/col1/cp1/partial_indices
No     1075838976    1       1075838976  /../vpart:/vol2/col1/cp1/summary.0
No     1075838976    1       1075838976  /../vpart:/vol2/col1/cp1/summary.1
No     1075838976    1       1075838976  /../vpart:/vol2/col1/cp_meta
No     10758389760   10      1075838976  /../vpart:/vol2/reserved_blocks
Note that everything other than the container set (CSET, where user data is stored) consumes 95 * 1075838976-byte blocks. As a result there are 86 blocks remaining for potential use by the CSET. Note that 86 * 1075838976 bytes = ~86.2 GiB.
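The block arithmetic can be reproduced in the same way. The minimal Python sketch below simply divides the RAID space by the 1075838976-byte block size and subtracts the metadata blocks listed in the table above:

# Derive the block counts for the first 200 GiB disk from the figures above.
BLOCK_SIZE = 1075838976                          # bytes per DDFS block

raid_bytes = 382362624 * 512                     # space presented by RAID (see array output)
total_blocks = raid_bytes // BLOCK_SIZE          # 181 blocks

# Blocks consumed by everything other than the CSET (indices, summaries,
# cp_meta and reserved_blocks, as listed in the table above).
metadata_blocks = 61 + 21 + 1 + 1 + 1 + 10       # 95 blocks

cset_blocks = total_blocks - metadata_blocks     # 86 blocks left for user data
print(total_blocks, cset_blocks, round(cset_blocks * BLOCK_SIZE / 2**30, 1))   # 181 86 86.2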
Within the CSET we use a very small amount of space for metadata, then estimate that we can use all of the remaining 1075838976-byte blocks on the system for creating 4.5 MiB containers. If we check the CSET metadata we see:
cm_attrs.psize=4718592 <=== Each container is 4.5 MiB
...
cm_attrs.max_containers=17403 <=== Maximum possible number of 'usable' containers
...
cm_attrs.reserved_containers=2176 <=== Reserved containers for internal operations
The total number of containers which can be created within the CSET is 17403 + 2176 = 19579.
Each container is 4.5 MiB in size, so 19579 containers equates to ~86.0 GiB of disk space.
Note, however, that reserved containers are for internal use only (by operations such as cleaning) and so are not considered when displaying the usable size of the file system to users. Because of this, the 'usable' size of the DDFS file system is 17403 * 4.5 MiB = ~76.5 GiB.
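The 'usable' size reported to users can be reproduced from the cm_attrs values quoted above; a minimal Python sketch:

# Reproduce the usable file system size from the CSET attributes above.
CONTAINER_SIZE = 4718592                  # cm_attrs.psize, i.e. 4.5 MiB per container

max_containers = 17403                    # usable containers
reserved_containers = 2176                # internal use only (for example, cleaning)

total_containers = max_containers + reserved_containers       # 19579
total_gib = total_containers * CONTAINER_SIZE / 2**30          # ~86.0 GiB
usable_gib = max_containers * CONTAINER_SIZE / 2**30           # ~76.5 GiB

print(f"Total CSET space:  {total_gib:.1f} GiB")
print(f"Usable DDFS space: {usable_gib:.1f} GiB")              # matches 'filesys show space'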
For this reason, if a user runs 'filesys show space' after adding a single 200 GiB disk and creating an instance of DDFS, they will see that the DDFS file system is only 76.5 GiB in size:
Active Tier:
Resource         Size GiB Used GiB Avail GiB Use% Cleanable GiB*
---------------- -------- -------- --------- ---- --------------
/data: pre-comp         -      9.0         -    -              -
/data: post-comp     76.5     15.0      61.4  20%            1.1
/ddvar               49.2      1.3      45.4   3%              -
/ddvar/core         158.5      0.7     149.7   0%              -
---------------- -------- -------- --------- ---- --------------
Note that overheads on subsequent data disks are significantly lower:
- Subsequent disks do not hold ext3 file systems
- DDFS metadata already exists on the first disk so very little is created on subsequent disks
For example, let's assume we add a second 100 GiB disk and expand DDFS. On this disk slice 5 will be given to the RAID driver (as on the first disk), but slice 6, whilst still created, will only be 4 KiB in size:
6 107GB 107GB 4096B primary
As a result practically the whole of the second disk is given to RAID (via slice 5). RAID uses 5.6% of this space for ROL, then presents the rest to DDFS; in the following example ~94.3 GiB of the 100 GiB disk is given to DDFS for use:
Array [ppart3] (active): [raid-type 106] [(0x1, 0x30) options] [NVR:N/N] [4608KB stripe] [197858304 sectors] [197858304 total sectors]
[dm-2p5]
This space is carved up into 1075838976 byte blocks - as a result the system creates an additional 93 blocks for DDFS to use:
FIXED                NUM     BLOCK
SIZE   SIZE          BLOCKS  SIZE        NAME
Yes    294779879424  274     1075838976  /../vpart:/vol1/col1
Yes    294779879424  274     1075838976  /../vpart:/vol1/col1/cp1
No     22592618496   21      1075838976  /../vpart:/vol1/col1/cp1/cset
No     65626177536   61      1075838976  /../vpart:/vol1/col1/cp1/full_indices
No     22592618496   21      1075838976  /../vpart:/vol1/col1/cp1/partial_indices
No     1075838976    1       1075838976  /../vpart:/vol1/col1/cp1/summary.0
No     1075838976    1       1075838976  /../vpart:/vol1/col1/cp1/summary.1
No     2151677952    2       1075838976  /../vpart:/vol1/col1/cp_meta
No     10758389760   10      1075838976  /../vpart:/vol1/reserved_blocks
Note that as all metadata file systems were already created on the first data disk, only a single additional block is used for metadata on the second disk (via the cp_meta file system). The remainder of the space is made available to the CSET and is considered usable for normal containers:
cm_attrs.max_containers=38379
...
cm_attrs.reserved_containers=2176
Note that 38379 * 4.5 MiB = ~168.7 GiB:
Resource         Size GiB Used GiB Avail GiB Use% Cleanable GiB
---------------- -------- -------- --------- ---- -------------
/data: pre-comp         -      0.0         -    -             -
/data: post-comp    168.7      0.1     168.6   0%           0.0
/ddvar               49.2      0.5      46.2   1%             -
/ddvar/core         158.5      0.3     150.1   0%             -
---------------- -------- -------- --------- ---- -------------
This shows that overheads are significantly smaller on all but the first data disk:
- From the first 200 GiB disk, DDFS got 76.5 GiB of usable space
- From the second 100 GiB data disk, DDFS got 92.2 GiB of usable space (see the sketch after this list)
This trend continues for all subsequent data disks.
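The per-disk contributions listed above can be reproduced directly from the before and after container counts; a minimal Python sketch using only the cm_attrs values quoted earlier:

# Incremental usable space contributed by the second 100 GiB disk.
CONTAINER_SIZE = 4718592                           # 4.5 MiB per container

containers_one_disk = 17403                        # max_containers after the first 200 GiB disk
containers_two_disks = 38379                       # max_containers after adding the 100 GiB disk

added_containers = containers_two_disks - containers_one_disk      # 20976
added_gib = added_containers * CONTAINER_SIZE / 2**30               # ~92.2 GiB
total_gib = containers_two_disks * CONTAINER_SIZE / 2**30           # ~168.7 GiB

print(f"Usable space added by second disk: {added_gib:.1f} GiB")
print(f"Total usable space:                {total_gib:.1f} GiB")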
Finally, it should be noted that the metadata file systems within DDFS (such as the index file systems) are not fixed in size. Depending on the workload of the system they may need to grow, which will take usable space away from the CSET. If this happens, the usable size of the CSET will decrease. This is expected; the total size of the CSET (and the size of the DDFS file system as reported by 'filesys show space') should not be thought of as a static value, even if the size of the underlying data disks does not change.
Additional Information
Note that the information contained in this article is valid as of DDOS 5.7.30.0 and may change in subsequent releases.
Affected Products
Data Domain Virtual Edition
Products
Data Domain, Data Domain Virtual Edition
Article Properties
Article Number: 000059680
Article Type: Solution
Last Modified: 05 Sep 2025
Version: 3