Article Number: 465216


Data Domain Operating System (DDOS) physical capacity measurement/reporting (PCM/PCR) frequently asked questions

Summary: Data Domain Operating System (DDOS) physical capacity measurement/reporting (PCM/PCR) frequently asked questions

Primary Product: Data Domain

Product: Data Domain

Last Published: 08 May 2020

Article Type: Break Fix

Published Status: Online

Version: 9


Article Content

Issue


Version 5.7 of the Data Domain Operating System (DDOS) introduces new functionality known as physical capacity measurement (PCM) or physical capacity reporting (PCR).

This article describes common use cases and questions around this feature. Note that PCM and PCR are used interchangeably in this document.
Cause
Resolution
What is Physical Capacity Measurement (PCM)?

PCM is a feature supported in DDOS 5.7 and later which allows accurate calculation of physical disk utilisation by a directory tree, a collection of directory trees, an mtree, or a collection of mtrees.

How does this differ from features in previous releases of DDOS?

When a file is ingested on a DDR we record various statistics about the file. One such statistic is 'post-lc bytes' or the physical amount of space taken by a file when written to the system. We can view post-lc bytes for a file or directory tree using the 'filesys show compression' command - for example:

sysadmin@dd9500# filesys show compression /data/col1/jf1
Total files: 4;  bytes/storage_used: 1.3
       Original Bytes:        4,309,378,324
  Globally Compressed:        3,242,487,836
   Locally Compressed:        3,293,594,658
            Meta-data:           13,897,112


This indicates that the above directory tree contains 4 files which, in total, used 3,293,594,658 bytes (3.07GiB) of physical space when ingested.

Note, however, that these statistics are generated at the time of ingest and are never updated afterwards. Due to the nature of de-duplication, as additional files are ingested or deleted and cleaning runs, the way each file de-duplicates against data on disk (and the amount of data it 'owns') changes. As a result the above statistics become stale over time and, for some workloads, can become extremely inaccurate.

PCM avoids the inconsistent results caused by these statistics becoming stale. Because PCM generates reports of physical disk utilisation at a specific point in time, the above limitations no longer apply and results are significantly more accurate.
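To make the staleness problem concrete, the following Python sketch shows how a file's ingest-time statistic can drift away from the space it really owns. The segment names and sizes are hypothetical and do not reflect actual DDFS internals:

```python
# Sketch of how ingest-time 'post-lc bytes' statistics go stale.
# Segment names and sizes are hypothetical, not real DDFS structures.
segment_size = 4096

# f1 is ingested first; all three of its segments are new data, so its
# recorded post-lc bytes reflect three segments of physical space.
f1_segments = {"s1", "s2", "s3"}
f1_post_lc = len(f1_segments) * segment_size   # 12288, fixed at ingest

# f2 arrives later sharing s2 and s3; only s4 is written as new data,
# so f2's recorded post-lc bytes cover just one segment.
f2_segments = {"s2", "s3", "s4"}
f2_post_lc = 1 * segment_size                  # 4096, fixed at ingest

# If f1 is then deleted and cleaning runs, s1 is reclaimed but s2 and s3
# survive because f2 still references them. The space f2 now 'owns' is
# all three of its segments, yet its recorded statistic never changes:
f2_actual = len(f2_segments) * segment_size
print(f2_post_lc, f2_actual)                   # 4096 12288
```

A point-in-time PCM job recomputes ownership from the current state of the system, so it reports the 12288-byte figure rather than the stale 4096.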

Are any additional licenses required for PCM?

No - PCM is not a licensed feature and as a result no additional licenses are required to use PCM.

Is PCM supported on all platforms?

No - PCM is supported on all hardware and virtual Data Domain (DDVE) appliances, except DDVEs using ATOS (Active Tier on Object Storage).

Are there any other prerequisites before PCM can be used?

By default PCM is disabled in DDOS 5.7. Before it can be used it must be enabled and its cache initialised as shown below:

sysadmin@dd9500# compression physical-capacity-measurement enable and-initialize
physical-capacity-measurement enabled. Initialization started.


Note that the PCM cache is used to speed up future PCM jobs and initialisation of the cache can take considerable time. PCM jobs can, however, be queued whilst the cache is being initialised.

How does PCM calculate usage totals?

PCM utilises mtree snapshots to determine physical utilisation for a group of files. As a result, when a PCM job starts the following will happen:

- An mtree snapshot is created against underlying mtrees. Note that this snapshot will be named pcr_snap_*, i.e.:

sysadmin@dd9500# snapshot list mtree /data/col1/jf2
Snapshot Information for MTree: /data/col1/jf2
----------------------------------------------
Name                                Pre-Comp (GiB)   Create Date         Retain Until        Status
---------------------------------   --------------   -----------------   -----------------   -------
pcr_snap_1440284055_1440360259_19              6.0   Aug 23 2015 13:04   Dec 31 1969 16:00   expired
---------------------------------   --------------   -----------------   -----------------   -------


- PCM finds files from the snapshot which are to be included in the PCM job (i.e. are in the pathsets/mtrees specified)
- PCM will walk the segment tree of these files to essentially build a list of unique segment fingerprints referenced by all of the files
- PCM will then find corresponding segments on disk (within the container set) and calculate the sum of the size of those segments
- Note that the sum of the size of these segments represents the current physical disk utilisation by the corresponding files
- In addition to the above the pre-compressed size of the set of files can be found from corresponding file metadata
- Once PCM jobs complete underlying PCM snapshots are expired for later removal
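The measurement steps above can be sketched in Python. This is purely illustrative: the fingerprints, sizes, and file names are hypothetical, and the real segment tree walk and container lookups are far more involved:

```python
# Illustrative sketch of PCM's unique-segment accounting.
# The fingerprint-to-size map stands in for the on-disk container set;
# none of this reflects actual DDFS internals.

# Hypothetical on-disk segments: fingerprint -> compressed size in bytes
container_set = {"fp1": 4096, "fp2": 8192, "fp3": 2048, "fp4": 4096}

# Each file in the snapshot references a list of segment fingerprints
# (its segment tree)
files_in_snapshot = {
    "/data/col1/jf1/a": ["fp1", "fp2"],
    "/data/col1/jf1/b": ["fp2", "fp3"],   # fp2 shared with file a
}

# Build the set of unique fingerprints referenced by all files in the job
unique_fps = set()
for fps in files_in_snapshot.values():
    unique_fps.update(fps)

# Physical utilisation = sum of the sizes of those unique segments,
# each counted once regardless of how many files reference it
physical_used = sum(container_set[fp] for fp in unique_fps)
print(physical_used)   # fp1 + fp2 + fp3 = 4096 + 8192 + 2048 = 14336
```

The key point is the set: a segment shared by several files contributes its size exactly once to the total.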

How do PCM jobs work?

PCM jobs are submitted by a user (or via a schedule) and are added to a PCM work queue. Depending on system workload PCM jobs may then be picked from the queue and started immediately or may be deferred for a period of time.

Examples of why PCM jobs may be deferred are as follows:

- Active tier clean is running on the system - PCM jobs and active tier clean cannot run in parallel. As a result PCM jobs queued whilst active tier clean is running will be deferred until active tier clean completes

- There are already a number of PCM jobs running against underlying mtrees - PCM utilises mtree snapshots and there are strict limits on how many PCM snapshots a given user can create at a given time against a single mtree. If these limits would be exceeded by a new PCM job, the job is deferred until existing jobs complete

Is it possible to control the resources used by PCM on a system?

PCM uses a throttling mechanism similar to that used by active tier clean, i.e. the PCM throttle can be set from 0 (not aggressive) to 100 (very aggressive). The higher the throttle, the more resources PCM uses and the greater the impact PCM jobs may have on other workloads on the system.

By default the PCM throttle is set to 20, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement throttle show
Throttle is set to 20 percent (default).


PCM throttle can be modified as follows with the change to throttle taking place immediately (i.e. no DDFS restart is required for PCM to pick up the new throttle setting):

sysadmin@dd9500# compression physical-capacity-measurement throttle set 50
Throttle set to 50 percent.


What are pathsets?

PCM jobs can be run in two ways, i.e.:

- Against a pre-defined 'pathset' (i.e. user specified collection of directories)
- Against a single mtree

Before jobs can be run against a given pathset the pathset must be created/defined as follows:

sysadmin@dd9500# compression physical-capacity-measurement pathset create jfall paths /data/col1/jf1,/data/col1/jf2
Pathset "jfall" created.


Note that specific directories can be added to or removed from an existing pathset as follows:

sysadmin@dd9500# compression physical-capacity-measurement pathset del jfall paths /data/col1/jf2
Path(s) deleted from pathset "jfall".
sysadmin@dd9500# compression physical-capacity-measurement pathset add jfall paths /data/col1/jf2
Path(s) added to pathset "jfall".


All pathsets which have been created can be displayed as follows:

sysadmin@dd9500# compression physical-capacity-measurement pathset show list
Pathset           Number of paths   Measurement-retention (days)
---------------   ---------------   ----------------------------
jf1                             1                            180
jf2                             1                            180
jfall                           2                            180
phys-gandhi3                    1                            180
phys-gandhi5-fc                 1                            180
phys-gandhi5                    1                            180
phys2-gandhi3                   2                            180
---------------   ---------------   ----------------------------
7 pathset(s) found.


To view specific paths defined within a pathset the 'pathset show detailed' command can be used:

sysadmin@dd9500# compression physical-capacity-measurement pathset show detailed jfall
Pathset: jfall
    Number of paths: 2
    Measurement-retention: 180 day(s)
    Paths:
        /data/col1/jf1
        /data/col1/jf2
sysadmin@dd9500#


To delete a pathset the 'pathset destroy' command can be used:

sysadmin@dd9500# compression physical-capacity-measurement pathset destroy jfall

Note, however, that this will remove all history for the given pathset.

Note that ad-hoc jobs against a single mtree do not need to have a pathset defined before being run.

How is a PCM job started?

A new PCM job can be submitted to the PCM work queue by using the 'sample start' command, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement sample start pathsets jfall
Measurement task(s) submitted and will begin as soon as resources are available.


In the above example a pre-defined pathset was used. To submit a PCM job against a single mtree the mtree can simply be specified, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement sample start mtrees /data/col1/backup
Measurement task(s) submitted and will begin as soon as resources are available.


By default PCM jobs are submitted with a priority of 'normal'. It is also possible, however, to specify a priority of urgent:

sysadmin@dd9500# compression physical-capacity-measurement sample start pathsets jf1 priority urgent
Measurement task(s) submitted and will begin as soon as resources are available.


Jobs with priority of 'urgent' will be queued ahead of those with priority of 'normal' (meaning they will be picked up and worked in preference to any submitted jobs of priority 'normal').
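The described ordering behaves like a standard priority queue keyed on (priority, submission order). A minimal sketch of these semantics (not the actual scheduler implementation):

```python
import heapq

# Priority queue sketch: urgent jobs (priority 0) are picked before
# normal jobs (priority 1); within a priority, earlier submissions first.
URGENT, NORMAL = 0, 1

queue = []
heapq.heappush(queue, (NORMAL, 1, "jfall"))   # submitted first, normal
heapq.heappush(queue, (URGENT, 2, "jf1"))     # submitted later, urgent
heapq.heappush(queue, (URGENT, 3, "jf2"))

# Jobs are picked in (priority, submission) order
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)   # ['jf1', 'jf2', 'jfall'] - urgent jobs jump the queue
```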

A list of currently submitted/running jobs can be displayed using the 'sample show current' command, for example:

sysadmin@dd9500# compression physical-capacity-measurement sample show current
Task ID       Type   Name    User       State       Creation Time         Measurement Time      Start Time   Priority   Percent
                                                                          (Submitted Time)                              Done
-----------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
47244640259   PS     jf2     sysadmin   Scheduled   2015/08/23 12:24:12   2015/08/23 12:24:12   --           Urgent     0
47244640258   PS     jf1     sysadmin   Scheduled   2015/08/23 12:24:09   2015/08/23 12:24:09   --           Urgent     0
47244640257   PS     jfall   sysadmin   Scheduled   2015/08/23 12:23:06   2015/08/23 12:23:06   --           Normal     0
-----------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
sysadmin@dd9500#


Can PCM jobs be scheduled?

Yes - if a specific PCM job needs to be run regularly it can be scheduled to run automatically as required. For example:

sysadmin@dd9500# compression physical-capacity-measurement schedule create jf_sched pathsets jfall,jf1,jf2 time 1400
Schedule "jf_sched" created.


Note that schedules can be created to run daily, on specific days of the week, or certain days of each month.

An existing schedule can be modified using the 'schedule modify' command:

sysadmin@dd9500# compression physical-capacity-measurement schedule modify jf_sched priority urgent time 1700 day Wed,Fri
Schedule "jf_sched" modified.


In addition an existing schedule can have pathsets added/removed as follows:

sysadmin@dd9500# compression physical-capacity-measurement schedule del jf_sched pathsets jf2
Schedule "jf_sched" modified.
sysadmin@dd9500# compression physical-capacity-measurement schedule add jf_sched pathsets jf2
Schedule "jf_sched" modified.


Note that a schedule can only contain pathsets OR mtrees (i.e. the two cannot be mixed):

sysadmin@dd9500# compression physical-capacity-measurement schedule create jf_sched2 mtrees /data/col1/backup time 1400
Schedule "jf_sched2" created.
sysadmin@dd9500# compression physical-capacity-measurement schedule add jf_sched2 pathsets jfall
**** Failed to add: this schedule is only for mtrees.


To view details of existing schedules the 'schedule show all' command can be used, for example:

sysadmin@dd9500# compression physical-capacity-measurement schedule show all
Name:      jf_sched
Status:    enabled
Priority:  urgent
Frequency: weekly on Wed, Fri
Time:      17:00
Pathset(s):
    jfall
    jf1
    jf2

Name:      jf_sched2
Status:    enabled
Priority:  normal
Frequency: daily
Time:      14:00
MTree(s):
    /data/col1/backup


Existing schedules can be disabled or enabled on the fly, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement schedule disable jf_sched2
Schedule "jf_sched2" disabled.
sysadmin@dd9500# compression physical-capacity-measurement schedule enable jf_sched2
Schedule "jf_sched2" enabled.


A schedule can also be destroyed:

sysadmin@dd9500# compression physical-capacity-measurement schedule destroy jf_sched2
Schedule "jf_sched2" destroyed.


Note that this will NOT remove history for the corresponding mtrees/pathsets (it just means that new PCM jobs will no longer be automatically scheduled).

How are scheduled jobs started?

When a PCM schedule is added and enabled this will cause a corresponding entry to be added to /etc/crontab, i.e.:

#
# collection.1.crontab.pcr.jf_sched.0
#
00 17 * * Wed,Fri  root /ddr/bin/ddsh -a compression physical-capacity-measurement sample start force priority urgent objects-from-schedule jf_sched


Note that the cron job is removed from /etc/crontab if the schedule is disabled or destroyed.

Can I abort a running PCM job?

Yes - running PCM jobs can be aborted using either the task id or pathset/mtree names. For example we see that we have two PCM jobs queued:

SE@dd9500## compression physical-capacity-measurement sample show current
Task ID        Type   Name    User       State       Creation Time         Measurement Time      Start Time   Priority   Percent
                                                                           (Submitted Time)                              Done
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
124554051585   PS     jfall   sysadmin   Scheduled   2015/08/30 16:00:48   2015/08/30 16:00:48   --           Normal     0
124554051586   PS     jfall   sysadmin   Scheduled   2015/08/30 16:01:55   2015/08/30 16:01:55   --           Normal     0
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------


These jobs can be aborted using either the task-id (to abort a single job):

SE@dd9500## compression physical-capacity-measurement sample stop task-id 124554051585
**   This will abort any submitted or running compression physical-capacity-measurement sampling tasks.
        Do you want to proceed? (yes|no) [no]: yes
1 task(s) aborted.


Leaving us with a single running job:

SE@dd9500## compression physical-capacity-measurement sample show current
Task ID        Type   Name    User       State       Creation Time         Measurement Time      Start Time   Priority   Percent
                                                                           (Submitted Time)                              Done
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
124554051586   PS     jfall   sysadmin   Scheduled   2015/08/30 16:01:55   2015/08/30 16:01:55   --           Normal     0
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------


Or pathset name:

SE@dd9500## compression physical-capacity-measurement sample stop pathsets jfall
**   This will abort any submitted or running compression physical-capacity-measurement sampling tasks.
        Do you want to proceed? (yes|no) [no]: yes
1 task(s) aborted.


Leaving us with no jobs:

SE@dd9500## compression physical-capacity-measurement sample show current
No measurement tasks found.


How can details of completed jobs be displayed?

Details of completed jobs can be viewed with the 'sample show history' command. For example to show details for a single pathset:

SE@dd9500## compression physical-capacity-measurement sample show history pathset jfall
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)
2015/08/23 13:04:20           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/26 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/27 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/28 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/29 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/30 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
-------------------   ------------   -------------   -----------   ----------   --------------
Total number of measurements retrieved = 7.


The detailed-history parameter also shows start/end times of each job:

SE@dd9500## compression physical-capacity-measurement sample show detailed-history pathset jfall
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp   Task ID        Task Start Time       Task End Time
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------   ------------   -------------------   -------------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)   47244640257    2015/08/23 12:25:19   2015/08/23 12:25:23
2015/08/23 13:04:20           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   51539607553    2015/08/23 13:05:45   2015/08/23 13:05:48
2015/08/26 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   77309411329    2015/08/26 14:02:50   2015/08/26 14:02:50
2015/08/27 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   85899345921    2015/08/27 14:03:06   2015/08/27 14:03:06
2015/08/28 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   94489280513    2015/08/28 14:02:50   2015/08/28 14:02:51
2015/08/29 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   103079215105   2015/08/29 14:01:40   2015/08/29 14:01:41
2015/08/30 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   115964116993   2015/08/30 14:04:12   2015/08/30 14:04:12
-------------------   ------------   -------------   -----------   ----------   --------------   ------------   -------------------   -------------------
Total number of measurements retrieved = 7.


Note that either command can be modified to retrieve results only over a specific time period:

SE@dd9500## compression physical-capacity-measurement sample show history pathset jfall last 2days
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/29 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/30 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
-------------------   ------------   -------------   -----------   ----------   --------------
Total number of measurements retrieved = 2.


Or between specific dates/times:

SE@dd9500## compression physical-capacity-measurement sample show history pathset jfall start 08231010 end 08231400
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)
2015/08/23 13:04:20           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
-------------------   ------------   -------------   -----------   ----------   --------------
Total number of measurements retrieved = 2.


How long is history kept for a specific pathset/mtree?

By default the results for each PCM pathset are kept for 180 days. This can be changed, however, by modifying the corresponding pathset:

SE@dd9500## compression physical-capacity-measurement pathset modify jfall measurement-retention 14
Measurement-retention changed to 14 day(s).


PCM history is held in the system's historical database. As a result, if the historical database is lost or damaged, all PCM history is also lost.

Are there any caveats to be aware of when using PCM?

- PCM jobs will be suspended if the system has less than 10% available space
- PCM jobs will be suspended whilst the cleaning cycle is running

As previously stated, PCM is a tool for calculating physical utilisation by a specific set of directories/mtrees. When calculating results for a pathset or set of mtrees, PCM counts the size of each unique segment of data used by that pathset or set of mtrees only once. Note, however, that due to the nature of de-duplication there may be files outside the pathsets/mtrees against which the job is run that also de-duplicate against the same data.

As a result, if files de-duplicating against the same data are included in multiple different PCM jobs, the segments making up those files may be counted multiple times (once by each PCM job). This means that whilst the results of each individual PCM job are accurate, the results of multiple PCM jobs cannot be summed to give accurate physical utilisation for the combined set of pathsets/mtrees.

Due to the way PCM jobs calculate the post-comp space used on disk by a pathset or mtree:

- PCM jobs do not account for dead data (disk space used by deleted files which has not yet been reclaimed by cleaning)
- PCM does not account for any overhead or data locked in snapshots created for the mtrees or pathsets being measured
- As PCM jobs only iterate over the list of live files, most filesystem overhead/metadata (index files, partially filled containers, DM file/directory structures, etc.) is not accounted for

For example, I have an mtree called /data/col1/jf1 in which I create three 1GiB files, i.e.:

!!!! dd9500 YOUR DATA IS IN DANGER !!!! # for i in 1 2 3 ; do
> dd if=/dev/urandom of=/data/col1/jf1/${i} bs=1024k count=1024
> done


I then copy those files to a second mtree (/data/col1/jf2):

!!!! dd9500 YOUR DATA IS IN DANGER !!!! # cp /data/col1/jf1/1 /data/col1/jf2/4
!!!! dd9500 YOUR DATA IS IN DANGER !!!! # cp /data/col1/jf1/2 /data/col1/jf2/5
!!!! dd9500 YOUR DATA IS IN DANGER !!!! # cp /data/col1/jf1/3 /data/col1/jf2/6


Finally I create a new 1GiB file in /data/col1/jf2:

!!!! dd9500 YOUR DATA IS IN DANGER !!!! # dd if=/dev/urandom of=/data/col1/jf2/7 bs=1024k count=1024
1024+0 records in
1024+0 records out


If local compression of data is disregarded and only de-duplication is considered then it is clear that each mtree used the following amount of physical space when the files were written:

/data/col1/jf1: 3GiB
/data/col1/jf2: 1GiB (for the new file - the copied files de-duplicated against existing data so consumed minimal physical space)

As a result the sum of physical space utilisation by /data/col1/jf1 and /data/col1/jf2 should be around 4GiB.

Three PCM pathsets are created:

jf1 containing /data/col1/jf1
jf2 containing /data/col1/jf2
jfall containing /data/col1/jf1 and /data/col1/jf2

The PCM jobs are run and provide output as follows:

Pathset: jf1
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:24:09            3.0             3.2         0.96x        0.98x   0.94x (-6.21%)
-------------------   ------------   -------------   -----------   ----------   --------------

Pathset: jf2
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:24:12            4.0             4.2         0.98x        0.98x   0.96x (-4.14%)
-------------------   ------------   -------------   -----------   ----------   --------------


These values are correct as each PCM job is only looking at physical data referenced by the files in its corresponding pathset. This means that data for files which were copied is counted twice (once by each PCM job).

At this point it may seem reasonable to assume that total physical utilisation by the /data/col1/jf1 and /data/col1/jf2 mtrees can be found by simply summing the 'physical used' values from the above outputs. Note, however, that this gives 7.4GiB, which is clearly not correct (above it was estimated that, due to de-duplication, total utilisation would be around 4GiB).

To get an accurate value for total physical utilisation of /data/col1/jf1 and /data/col1/jf2 it is necessary to run a single PCM job covering both of these mtrees (i.e. use jfall). This will ensure that duplicate segments are only counted once and not twice as in the example above, i.e.:

Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)
-------------------   ------------   -------------   -----------   ----------   --------------


In summary, the output of multiple PCM jobs cannot be summed to give accurate physical utilisation for a set of pathsets/mtrees. Instead, a single PCM pathset should be defined covering all required mtrees/directories, as this ensures duplicate data is counted only once. If, instead, a separate PCM job were run for each mtree on a system and the results summed, it is entirely possible that the total physical used capacity would exceed the raw capacity of the system.
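The caveat can be reproduced with Python sets, using hypothetical 1GiB chunks of unique data in place of real segments:

```python
# Why per-pathset results cannot be summed: each letter stands for a
# hypothetical 1GiB chunk of unique post-comp data.
jf1 = {"A", "B", "C"}            # three original files
jf2 = {"A", "B", "C", "D"}       # copies of those files plus one new file

# Each PCM job correctly measures its own pathset in isolation,
# but summing the jobs counts the shared data twice:
per_pathset_sum = len(jf1) + len(jf2)   # 3 + 4 = 7

# A single job covering both pathsets counts each unique chunk once:
combined = len(jf1 | jf2)               # 4 - the deduplicated reality
print(per_pathset_sum, combined)        # 7 4
```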

Can PCM jobs only be submitted via the DDSH command line or can a graphical user interface also be used?

In this document the Data Domain command line interface (DDSH) is used to configure, submit, and review PCM jobs. Note, however, that PCM can also be used via the Data Domain Enterprise Manager/System Manager graphical user interfaces.

Note that the DDSH interface has a limit of 256 characters in any given command. As a result, if PCM jobs need to be configured against paths with very long directory names, it may be advantageous (or even required) to use one of the available graphical user interfaces.
Notes

Issue


Version 5.7 of the Data Domain Operating System (DDOS) introduces new functionality known as physical capacity management (PCM) or physical capacity reporting (PCR).

This article describes common use cases and questions around this feature. Note that PCM and PCR are used interchangeably in this document.
Cause
Resolution

What is Physical Capacity Measurement (PCM)?

PCM is a new feature supported in DDOS 5.7 and later which allows calculation of accurate physical disk utilisation by a directory tree, collection of directory trees, mtree, or a collection of mtrees.

How does this differ from features in previous releases of DDOS?

When a file is ingested on a DDR we record various statistics about the file. One such statistic is 'post-lc bytes' or the physical amount of space taken by a file when written to the system. We can view post-lc bytes for a file or directory tree using the 'filesys show compression' command - for example:

sysadmin@dd9500# filesys show compression /data/col1/jf1
Total files: 4;  bytes/storage_used: 1.3
       Original Bytes:        4,309,378,324
  Globally Compressed:        3,242,487,836
   Locally Compressed:        3,293,594,658
            Meta-data:           13,897,112


This indicates that the above directory tree contains 4 files which, in total, used 3,293,594,658 bytes (3.07Gb) of physical space when ingested.

Note, however, that these statistics are generated at the time of ingest and are not updated after this time. Due to the nature of de-duplication, however, as additional files are ingested/deleted and cleaning run, the way in which data on disk is de-duplicated against and as such the way each file de-duplicates (and the amount of data is 'owns') changes. Due to this the above statistics become stale over time and in some cases/workloads can become extremely inaccurate.

PCM is an effort to avoid inconsistent results caused by the above statistics becoming stale. As PCM is able to generate reports of physical disk utilisation at a specific point in time the above limitations no longer apply and results are guaranteed to be significantly more accurate.

Are any additional licenses required for PCM?

No - PCM is not a licensed feature and as a result no additional licenses are required to use PCM.

Is PCM support in all platforms?

No - PCM is supported on all Hardware and Virtual DataDomain appliances(DDVE), except on ATOS (Active Tier on Object Storage) DDVEs. 

Are there any other pre-requisites required before PCM can be used?

By default PCM is disabled in DDOS 5.7. Before it can be used it must be enabled and its cache initialised as shown below:

sysadmin@dd9500# compression physical-capacity-measurement enable and-initialize
physical-capacity-measurement enabled. Initialization started.


Note that the PCM cache is used to speed future PCM jobs and initialisation of the cache can take considerable time. Despite this PCM jobs can start to be queued whilst the PCM cache is being initialised.

How does PCM calculate usage totals?

PCM utilises mtree snapshots to determine physical utilisation for a group of files. As a result, when a PCM job starts the following will happen:

- An mtree snapshot is created against each underlying mtree. Note that these snapshots are named pcr_snap_*, i.e.:

sysadmin@dd9500# snapshot list mtree /data/col1/jf2
Snapshot Information for MTree: /data/col1/jf2
----------------------------------------------
Name                                Pre-Comp (GiB)   Create Date         Retain Until        Status
---------------------------------   --------------   -----------------   -----------------   -------
pcr_snap_1440284055_1440360259_19              6.0   Aug 23 2015 13:04   Dec 31 1969 16:00   expired
---------------------------------   --------------   -----------------   -----------------   -------


- PCM finds files from the snapshot which are to be included in the PCM job (i.e. are in the pathsets/mtrees specified)
- PCM will walk the segment tree of these files to essentially build a list of unique segment fingerprints referenced by all of the files
- PCM will then find corresponding segments on disk (within the container set) and calculate the sum of the size of those segments
- Note that the sum of the size of these segments represents the current physical disk utilisation by the corresponding files
- In addition to the above the pre-compressed size of the set of files can be found from corresponding file metadata
- Once PCM jobs complete underlying PCM snapshots are expired for later removal
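
The measurement logic described above can be sketched as a few lines of Python. This is an illustrative model, not DDOS code: files are represented as lists of hypothetical (fingerprint, size) segment pairs, and physical utilisation is the summed size of the *unique* segments referenced, so segments shared between files are counted only once.

```python
# Illustrative sketch (not DDOS code) of how a PCM job derives its totals.
def pcm_measure(files):
    """files: dict mapping filename -> list of (fingerprint, segment_size)."""
    unique_segments = {}          # fingerprint -> on-disk segment size
    pre_comp = 0                  # logical (pre-compressed) bytes
    for segments in files.values():
        for fingerprint, size in segments:
            pre_comp += size
            unique_segments[fingerprint] = size   # duplicates collapse here
    post_comp = sum(unique_segments.values())     # physical (post-comp) bytes
    return pre_comp, post_comp

# Two files sharing segment "b": logical size 3 units, physical only 2.
files = {"f1": [("a", 1), ("b", 1)], "f2": [("b", 1)]}
print(pcm_measure(files))   # -> (3, 2)
```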

How do PCM jobs work?

PCM jobs are submitted by a user (or via a schedule) and are added to a PCM work queue. Depending on system workload PCM jobs may then be picked from the queue and started immediately or may be deferred for a period of time.

Examples of why PCM jobs may be deferred are as follows:

- Active tier clean is running on the system - PCM jobs and active tier clean cannot run in parallel. As a result PCM jobs queued whilst active tier clean is running will be deferred until active tier clean completes

- There are already a number of PCM jobs running against the underlying mtrees - PCM utilises mtree snapshots and there are strict limits on how many PCM snapshots a given user can create against a single mtree at any one time. If a new PCM job would exceed these limits it will be deferred until existing jobs complete

Is it possible to control the resources used by PCM on a system?

PCM uses a throttling mechanism similar to that used by active tier clean, i.e. the PCM throttle can be set from 0 (not aggressive) to 100 (very aggressive). The higher the throttle, the more resources PCM will use and the larger the impact PCM jobs may have on other workloads on the system.

By default the PCM throttle is set to 20, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement throttle show
Throttle is set to 20 percent (default).


PCM throttle can be modified as follows with the change to throttle taking place immediately (i.e. no DDFS restart is required for PCM to pick up the new throttle setting):

sysadmin@dd9500# compression physical-capacity-measurement throttle set 50
Throttle set to 50 percent.


What are pathsets?

PCM jobs can be run in two ways, i.e.:

- Against a pre-defined 'pathset' (i.e. user specified collection of directories)
- Against a single mtree

Before jobs can be run against a given pathset the pathset must be created/defined as follows:

sysadmin@dd9500# compression physical-capacity-measurement pathset create jfall paths /data/col1/jf1,/data/col1/jf2
Pathset "jfall" created.


Note that specific directories can be added to or removed from an existing pathset as follows:

sysadmin@dd9500# compression physical-capacity-measurement pathset del jfall paths /data/col1/jf2
Path(s) deleted from pathset "jfall".
sysadmin@dd9500# compression physical-capacity-measurement pathset add jfall paths /data/col1/jf2
Path(s) added to pathset "jfall".


All pathsets which have been created can be displayed as follows:

sysadmin@dd9500# compression physical-capacity-measurement pathset show list
Pathset           Number of paths   Measurement-retention (days)
---------------   ---------------   ----------------------------
jf1                             1                            180
jf2                             1                            180
jfall                           2                            180
phys-gandhi3                    1                            180
phys-gandhi5-fc                 1                            180
phys-gandhi5                    1                            180
phys2-gandhi3                   2                            180
---------------   ---------------   ----------------------------
7 pathset(s) found.


To view specific paths defined within a pathset the 'pathset show detailed' command can be used:

sysadmin@dd9500# compression physical-capacity-measurement pathset show detailed jfall
Pathset: jfall
    Number of paths: 2
    Measurement-retention: 180 day(s)
    Paths:
        /data/col1/jf1
        /data/col1/jf2
sysadmin@dd9500#


To delete a pathset the 'pathset destroy' command can be used:

sysadmin@dd9500# compression physical-capacity-measurement pathset destroy jfall

Note, however, that this will remove all history for the given pathset.

Note that ad-hoc jobs against a single mtree do not need to have a pathset defined before being run.

How is a PCM job started?

A new PCM job can be submitted to the PCM work queue by using the 'sample start' command, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement sample start pathsets jfall
Measurement task(s) submitted and will begin as soon as resources are available.


In the above example a pre-defined pathset was used. To submit a PCM job against a single mtree the mtree can simply be specified, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement sample start mtrees /data/col1/backup
Measurement task(s) submitted and will begin as soon as resources are available.


By default PCM jobs are submitted with a priority of 'normal'. It is also possible, however, to specify a priority of urgent:

sysadmin@dd9500# compression physical-capacity-measurement sample start pathsets jf1 priority urgent
Measurement task(s) submitted and will begin as soon as resources are available.


Jobs with priority of 'urgent' will be queued ahead of those with priority of 'normal' (meaning they will be picked up and worked in preference to any submitted jobs of priority 'normal').
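
This queueing behaviour can be modelled as a simple priority queue. The sketch below is illustrative only (it is not the actual DDOS scheduler): urgent jobs are dequeued before normal ones, and within a priority level jobs are worked in submission order.

```python
import heapq
import itertools

# Illustrative sketch of the PCM work queue (not the actual DDOS scheduler).
PRIORITY_RANK = {"urgent": 0, "normal": 1}   # lower rank pops first
_counter = itertools.count()                 # preserves FIFO within a priority

def submit(queue, name, priority="normal"):
    heapq.heappush(queue, (PRIORITY_RANK[priority], next(_counter), name))

def next_job(queue):
    return heapq.heappop(queue)[2]

queue = []
submit(queue, "jfall")                       # normal, submitted first
submit(queue, "jf1", priority="urgent")      # urgent, submitted later
submit(queue, "jf2", priority="urgent")
print([next_job(queue) for _ in range(3)])   # -> ['jf1', 'jf2', 'jfall']
```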

A list of currently submitted/running jobs can be displayed using the 'sample show current' command, for example:

sysadmin@dd9500# compression physical-capacity-measurement sample show current
Task ID       Type   Name    User       State       Creation Time         Measurement Time      Start Time   Priority   Percent
                                                                          (Submitted Time)                              Done
-----------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
47244640259   PS     jf2     sysadmin   Scheduled   2015/08/23 12:24:12   2015/08/23 12:24:12   --           Urgent     0
47244640258   PS     jf1     sysadmin   Scheduled   2015/08/23 12:24:09   2015/08/23 12:24:09   --           Urgent     0
47244640257   PS     jfall   sysadmin   Scheduled   2015/08/23 12:23:06   2015/08/23 12:23:06   --           Normal     0
-----------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
sysadmin@dd9500#


Can PCM jobs be scheduled?

Yes - if a specific PCM job needs to be run regularly it can be scheduled to run automatically as required. For example:

sysadmin@dd9500# compression physical-capacity-measurement schedule create jf_sched pathsets jfall,jf1,jf2 time 1400
Schedule "jf_sched" created.


Note that schedules can be created to run daily, on specific days of the week, or on certain days of each month.

An existing schedule can be modified using the 'schedule modify' command:

sysadmin@dd9500# compression physical-capacity-measurement schedule modify jf_sched priority urgent time 1700 day Wed,Fri
Schedule "jf_sched" modified.


In addition an existing schedule can have pathsets added/removed as follows:

sysadmin@dd9500# compression physical-capacity-measurement schedule del jf_sched pathsets jf2
Schedule "jf_sched" modified.
sysadmin@dd9500# compression physical-capacity-measurement schedule add jf_sched pathsets jf2
Schedule "jf_sched" modified.


Note that a schedule can only contain pathsets OR mtrees (i.e. the two cannot be mixed):

sysadmin@dd9500# compression physical-capacity-measurement schedule create jf_sched2 mtrees /data/col1/backup time 1400
Schedule "jf_sched2" created.
sysadmin@dd9500# compression physical-capacity-measurement schedule add jf_sched2 pathsets jfall
**** Failed to add: this schedule is only for mtrees.


To view details of existing schedules the 'schedule show all' command can be used, for example:

sysadmin@dd9500# compression physical-capacity-measurement schedule show all
Name:      jf_sched
Status:    enabled
Priority:  urgent
Frequency: weekly on Wed, Fri
Time:      17:00
Pathset(s):
    jfall
    jf1
    jf2

Name:      jf_sched2
Status:    enabled
Priority:  normal
Frequency: daily
Time:      14:00
MTree(s):
    /data/col1/backup


Existing schedules can be disabled or enabled on the fly, i.e.:

sysadmin@dd9500# compression physical-capacity-measurement schedule disable jf_sched2
Schedule "jf_sched2" disabled.
sysadmin@dd9500# compression physical-capacity-measurement schedule enable jf_sched2
Schedule "jf_sched2" enabled.


A schedule can also be destroyed:

sysadmin@dd9500# compression physical-capacity-measurement schedule destroy jf_sched2
Schedule "jf_sched2" destroyed.


Note that this will NOT remove history for the corresponding mtrees/pathsets (it just means that new PCM jobs will no longer be started automatically).

How are scheduled jobs started?

When a PCM schedule is added and enabled, a corresponding entry is added to /etc/crontab, i.e.:

#
# collection.1.crontab.pcr.jf_sched.0
#
00 17 * * Wed,Fri  root /ddr/bin/ddsh -a compression physical-capacity-measurement sample start force priority urgent objects-from-schedule jf_sched


Note that the cron job is removed from /etc/crontab if the schedule is disabled or destroyed.
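
The mapping from a schedule's time/day settings to the crontab fields seen above (minute, hour, day-of-month, month, day-of-week) can be sketched with a hypothetical helper. This is not a DDOS tool, just an illustration of how the "time 1700 day Wed,Fri" settings translate into the "00 17 * * Wed,Fri" cron entry:

```python
# Hypothetical helper (not part of DDOS) showing how a PCM schedule's
# time/day settings map onto standard crontab fields.
def cron_fields(time_hhmm, days=None):
    hour, minute = time_hhmm[:2], time_hhmm[2:]
    dow = ",".join(days) if days else "*"   # a daily schedule runs every day
    return f"{minute} {hour} * * {dow}"

print(cron_fields("1700", ["Wed", "Fri"]))  # -> "00 17 * * Wed,Fri"
print(cron_fields("1400"))                  # -> "00 14 * * *"
```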

Can I abort a running PCM job?

Yes - running PCM jobs can be aborted using either the task id or pathset/mtree names. For example we see that we have two PCM jobs queued:

SE@dd9500## compression physical-capacity-measurement sample show current
Task ID        Type   Name    User       State       Creation Time         Measurement Time      Start Time   Priority   Percent
                                                                           (Submitted Time)                              Done
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
124554051585   PS     jfall   sysadmin   Scheduled   2015/08/30 16:00:48   2015/08/30 16:00:48   --           Normal     0
124554051586   PS     jfall   sysadmin   Scheduled   2015/08/30 16:01:55   2015/08/30 16:01:55   --           Normal     0
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------


These jobs can be aborted using either the task-id (to abort a single job):

SE@dd9500## compression physical-capacity-measurement sample stop task-id 124554051585
**   This will abort any submitted or running compression physical-capacity-measurement sampling tasks.
        Do you want to proceed? (yes|no) [no]: yes
1 task(s) aborted.


Leaving us with a single running job:

SE@dd9500## compression physical-capacity-measurement sample show current
Task ID        Type   Name    User       State       Creation Time         Measurement Time      Start Time   Priority   Percent
                                                                           (Submitted Time)                              Done
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------
124554051586   PS     jfall   sysadmin   Scheduled   2015/08/30 16:01:55   2015/08/30 16:01:55   --           Normal     0
------------   ----   -----   --------   ---------   -------------------   -------------------   ----------   --------   --------


Or pathset name:

SE@dd9500## compression physical-capacity-measurement sample stop pathsets jfall
**   This will abort any submitted or running compression physical-capacity-measurement sampling tasks.
        Do you want to proceed? (yes|no) [no]: yes
1 task(s) aborted.


Leaving us with no jobs:

SE@dd9500## compression physical-capacity-measurement sample show current
No measurement tasks found.


How can details of completed jobs be displayed?

Details of completed jobs can be viewed with the 'sample show history' command. For example to show details for a single pathset:

SE@dd9500## compression physical-capacity-measurement sample show history pathset jfall
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)
2015/08/23 13:04:20           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/26 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/27 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/28 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/29 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/30 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
-------------------   ------------   -------------   -----------   ----------   --------------
Total number of measurements retrieved = 7.


The detailed-history parameter also shows start/end times of each job:

SE@dd9500## compression physical-capacity-measurement sample show detailed-history pathset jfall
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp   Task ID        Task Start Time       Task End Time
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------   ------------   -------------------   -------------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)   47244640257    2015/08/23 12:25:19   2015/08/23 12:25:23
2015/08/23 13:04:20           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   51539607553    2015/08/23 13:05:45   2015/08/23 13:05:48
2015/08/26 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   77309411329    2015/08/26 14:02:50   2015/08/26 14:02:50
2015/08/27 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   85899345921    2015/08/27 14:03:06   2015/08/27 14:03:06
2015/08/28 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   94489280513    2015/08/28 14:02:50   2015/08/28 14:02:51
2015/08/29 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   103079215105   2015/08/29 14:01:40   2015/08/29 14:01:41
2015/08/30 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)   115964116993   2015/08/30 14:04:12   2015/08/30 14:04:12
-------------------   ------------   -------------   -----------   ----------   --------------   ------------   -------------------   -------------------
Total number of measurements retrieved = 7.


Note that either command can be modified to only retrieve results over a specific time period:

SE@dd9500## compression physical-capacity-measurement sample show history pathset jfall last 2days
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/29 14:00:02           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
2015/08/30 14:00:01           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
-------------------   ------------   -------------   -----------   ----------   --------------
Total number of measurements retrieved = 2.


Or between specific dates/times:

SE@dd9500## compression physical-capacity-measurement sample show history pathset jfall start 08231010 end 08231400
Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)
2015/08/23 13:04:20           10.0             6.2         1.63x        0.98x   1.61x (37.84%)
-------------------   ------------   -------------   -----------   ----------   --------------
Total number of measurements retrieved = 2.


How long is history kept for a specific pathset/mtree?

By default the results for each PCM pathset are kept for 180 days. This can be changed by modifying the corresponding pathset:

SE@dd9500## compression physical-capacity-measurement pathset modify jfall measurement-retention 14
Measurement-retention changed to 14 day(s).


PCM history is held in the system's historical database. As a result, if the historical database is lost or damaged, all PCM history will also be lost.

Are there any caveats to be aware of when using PCM?

- PCM jobs will be suspended if the system has less than 10% available space
- PCM jobs will be suspended while a cleaning cycle is running

As previously stated, PCM is a tool to calculate physical utilisation by a specific set of directories/mtrees. When calculating results for a pathset or set of mtrees, PCM only counts the size of each unique data segment used by that pathset or set of mtrees once. Note, however, that due to the nature of de-duplication there may be files outside the pathsets/mtrees against which the job is run which also de-duplicate against the same data.

As a result, if files de-duplicating against the same data are included in multiple different PCM jobs, the segments making up those files may be counted multiple times (once by each PCM job). This means that whilst the results of each individual PCM job are accurate, the results of multiple PCM jobs cannot be summed to give accurate physical utilisation for the combined set of pathsets/mtrees.

Due to the way PCM jobs calculate the post-comp space used on disk by a pathset or mtree:

- PCM jobs do not account for dead data (disk space used by deleted files which may not yet have been reclaimed by cleaning/GC)
- PCM does not account for any overhead or data locked in snapshots created against the mtrees or pathsets being measured
- As PCM jobs simply iterate over the list of live files, most filesystem overhead/metadata (index files, partially filled containers, DM file/directory structures, etc.) is not accounted for

For example, I have an mtree called /data/col1/jf1 in which I create three 1 GiB files, i.e.:

!!!! dd9500 YOUR DATA IS IN DANGER !!!! # for i in 1 2 3 ; do
> dd if=/dev/urandom of=/data/col1/jf1/${i} bs=1024k count=1024
> done


I then copy those files to a second mtree (/data/col1/jf2):

!!!! dd9500 YOUR DATA IS IN DANGER !!!! # cp /data/col1/jf1/1 /data/col1/jf2/4
!!!! dd9500 YOUR DATA IS IN DANGER !!!! # cp /data/col1/jf1/2 /data/col1/jf2/5
!!!! dd9500 YOUR DATA IS IN DANGER !!!! # cp /data/col1/jf1/3 /data/col1/jf2/6


Finally I create a new 1 GiB file in /data/col1/jf2:

!!!! dd9500 YOUR DATA IS IN DANGER !!!! # dd if=/dev/urandom of=/data/col1/jf2/7 bs=1024k count=1024
1024+0 records in
1024+0 records out


If local compression of data is disregarded and only de-duplication is considered then it is clear that each mtree used the following amount of physical space when the files were written:

/data/col1/jf1: 3 GiB
/data/col1/jf2: 1 GiB (for the new file - the copied files de-duplicate against existing data so consume minimal physical space)

As a result the sum of physical space used by /data/col1/jf1 and /data/col1/jf2 should be around 4 GiB.

Three PCM pathsets are created:

jf1 containing /data/col1/jf1
jf2 containing /data/col1/jf2
jfall containing /data/col1/jf1 and /data/col1/jf2

The PCM jobs are run and provide output as follows:

Pathset: jf1
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:24:09            3.0             3.2         0.96x        0.98x   0.94x (-6.21%)
-------------------   ------------   -------------   -----------   ----------   --------------

Pathset: jf2
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:24:12            4.0             4.2         0.98x        0.98x   0.96x (-4.14%)
-------------------   ------------   -------------   -----------   ----------   --------------


These values are correct, as each PCM job only looks at the physical data referenced by the files in its own pathset. This means, however, that the data for the copied files is counted twice (once by each PCM job).

At this point it may seem reasonable that to get total physical utilisation by the /data/col1/jf1 and /data/col1/jf2 mtrees we can simply sum the values of 'physical used' from the above outputs. Note, however, that this gives 7.4 GiB, which is clearly not correct (above it was estimated that, due to de-duplication, total utilisation would be around 4 GiB).

To get an accurate value for total physical utilisation of /data/col1/jf1 and /data/col1/jf2 it is necessary to run a single PCM job covering both of these mtrees (i.e. use jfall). This will ensure that duplicate segments are only counted once and not twice as in the example above, i.e.:

Pathset: jfall
Measurement Time      Logical Used   Physical Used   Global-Comp   Local-Comp       Total-Comp
                        (Pre-Comp)     (Post-Comp)        Factor       Factor           Factor
                             (GiB)           (GiB)                               (Reduction %)
-------------------   ------------   -------------   -----------   ----------   --------------
2015/08/23 12:23:06            7.0             4.2         1.70x        0.98x   1.67x (40.24%)
-------------------   ------------   -------------   -----------   ----------   --------------


In summary, the output of multiple PCM jobs cannot be summed to give accurate physical utilisation for a set of pathsets/mtrees. Instead a single PCM pathset should be defined covering all required mtrees/directories, as this ensures duplicate data is only counted once. If instead a separate PCM job were run for each mtree on a system and the results summed, it is entirely possible for the total physical used capacity to exceed the raw capacity of the system.
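
The double-counting effect in the jf1/jf2 example can be shown with a small sketch. This is illustrative only (not DDOS code): each pathset is modelled as a map of hypothetical segment fingerprints to sizes (units of GiB, local compression disregarded), mirroring the example above.

```python
# Illustrative model (not DDOS code) of why per-pathset results cannot be
# summed: segments shared between pathsets are counted once per job, but
# only once in a single combined job.
jf1 = {"s1": 1, "s2": 1, "s3": 1}              # three unique 1 GiB files
jf2 = {"s1": 1, "s2": 1, "s3": 1, "s4": 1}     # copies of jf1 plus one new file

def physical_used(*pathsets):
    merged = {}
    for segments in pathsets:
        merged.update(segments)   # union: shared fingerprints count once
    return sum(merged.values())

print(physical_used(jf1) + physical_used(jf2))  # -> 7 (over-counts shared data)
print(physical_used(jf1, jf2))                  # -> 4 (correct combined total)
```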

Can PCM jobs only be submitted via the DDSH command line or can a graphical user interface also be used?

In this document the Data Domain command line interface (DDSH) is used to configure, submit, and review PCM jobs. Note, however, that PCM can also be used via the Data Domain Enterprise Manager/System Manager graphical user interfaces.

Note that the DDSH interface has a limit of 256 characters per command. As a result, if PCM jobs need to be configured against paths with very long directory names, it may be advantageous (or even required) to use one of the available graphical user interfaces.


Article Properties

First Published

Fri Feb 05 2016 18:54:38 GMT

