Data Domain: Understanding System Show Performance output

Summary: Understanding system show performance view legacy (SSP) in DDOS

This article is not tied to any specific product. Not all product versions are identified in this article.

Instructions

This article provides guidance on the system show performance view legacy command (also referred to as SSP) in DDOS systems. Available in the regular DDOS CLI, SSP processes the performance log and presents key system performance metrics over a specified time range at the chosen sampling interval, giving a good overview of many DD system performance metrics. This article covers only the legacy view output and does not cover the non-legacy system show performance format.

The command syntax is:

  • system show performance view legacy [duration <duration> {hr | min} [interval <interval> {hr | min}]]
    • Both the "duration" and the "interval" modifiers are optional.
      • If neither is provided, stats are printed for the last 24 hours at 10-minute intervals. This matches the data printed in the daily ASUPs under the "SYSTEM SHOW PERFORMANCE LEGACY VIEW" section.
      • The most granular supported interval is 1 minute.

When the options are passed explicitly, the command line is as follows (the command below returns the same output as the one with no options):

  • system show performance view legacy duration 24 hr interval 10 min
    • This shows the system performance for the last 24 hours at 10-minute intervals.
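
For example, to zoom in on the most recent hour at the finest supported granularity:

  • system show performance view legacy duration 1 hr interval 1 min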

Sample command output on any supported DDOS release (7.x and later) is shown below:

  -----------Throughput (MB/s)----------- ---------------Protocol-----------------  Compression  ------Cache Miss--------  -----------Streams----------- -MTree Active-  -State-  -----Utilization-----  --Latency--  --------------------SS LOAD BALANCE(user/repl)---------------------   -----------------SS LOAD BALANCE(gc)--------------------
                                                                                                                                                                     Repl
Date       Time      Read  Write Repl Network  Repl Pre-comp ops/s  load    data(MB/s)    wait(ms/MB)  gcomp lcomp  thra unus ovhd data meta    rd/  wr/  r+/  w+/  in/ out      rd/wr      'CDPVMSFIRL'     CPU        disk       in ms          stream    prefetch        rd         wr            tot                   prefetch        rd         wr         tot
---------- --------  ----- ----- ----in/out--- ----in/out--- -----  --%--   --in/out---   --in/out---  ----- -----  ---- ---- ---- ---- ----  ----------------------------- --------------  ---------  -avg/max---- --max---  --avg/sdev-  ----------------------------avg/sdev-------------------------------   ----------------------------avg/sdev--------------------
2026/02/03 09:37:00  175.8 444.0   7.45/734.06   0.00/5479.44   698 3936.09%  38.35/167.96   0.87/  0.37   11.0   1.2    0%   4%  20%   0%   0%    12/  14/   0/   0/   0/  32       1/ 3       C--VM--I---   46%/ 73%[16]  48%[ 2]    0.6/  3.5  246.0/ 23.0 107.6K/  3.7K 172.5K/  8.3K   2.3K/  1.1K 282.4K/ 11.0K    0.0B/  0.0B 434.0M/434.0M   0.0B/  0.0B 434.0M/434.0M
2026/02/03 09:47:00  185.9 578.9   4.91/529.39   0.00/4342.92   742 3943.92%  42.16/177.61   0.88/  0.36   12.4   1.2    0%   5%  22%   0%   0%    12/  14/   0/   0/   0/  32       1/ 2       C---M--I---   42%/ 66%[61]  45%[ 6]    0.5/  3.1  244.5/ 23.5  77.2K/  7.3K 125.9K/ 10.1K   2.5K/388.0B 205.6K/ 17.7K    0.0B/  0.0B 479.4M/479.4M   0.0B/  0.0B 479.4M/479.4M
2026/02/03 09:57:00  130.4 384.7   4.52/453.07   0.00/3358.90   521 3965.19%  12.77/124.54   1.15/  0.34   29.9   1.4    0%   4%  20%   0%   0%    12/  11/   0/   0/   0/  32       1/ 3       C--VM--I---   38%/ 63%[61]  42%[ 8]    0.6/  3.3  232.0/ 33.0  68.9K/  4.7K 117.6K/ 13.8K 729.0B/ 17.0B 187.2K/ 18.5K    0.0B/  0.0B 497.7M/497.7M   0.0B/  0.0B 497.7M/497.7M
2026/02/03 10:07:00  225.1 543.9   4.56/535.18   0.00/4603.02  1035 3976.73%  15.48/214.93   1.34/  0.35   24.3   1.5    1%   5%  21%   0%   0%    12/  10/   0/   0/   0/  32       1/ 1       C--VM--I---   42%/ 65%[63]  44%[ 5]    0.9/  4.3  243.0/ 10.0  73.7K/  9.4K 122.6K/ 12.8K 999.0B/336.0B 197.3K/ 22.6K    0.0B/  0.0B 494.2M/494.2M   0.0B/  0.0B 494.2M/494.2M
2026/02/03 10:17:00  611.0 656.2   5.10/333.18   0.00/3289.09  2422 4020.14% 167.55/584.37   0.63/  0.38    2.2   2.2    1%   5%  21%   0%   0%    12/  16/   0/   0/   0/  32       1/ 3       C---M--I---   49%/ 69%[61]  47%[ 7]    0.7/  0.9  248.5/  4.5  80.2K/  1.1K 146.0K/  2.5K  16.9K/  2.1K 243.1K/790.0B    0.0B/  0.0B 481.7M/481.7M   0.0B/  0.0B 481.7M/481.7M
2026/02/03 10:27:00  488.7 549.5   2.67/206.53   0.00/572.18   1901 4043.06% 141.03/467.35   0.58/  0.36    2.6   1.9    0%   0%  15%   0%   0%    12/  10/   0/   0/   0/  32       1/ 1       C--VM--I---   41%/ 71%[61]  40%[12]    0.6/  1.4  245.0/ 11.0  61.2K/  7.5K 119.5K/ 10.0K  12.3K/  3.9K 193.0K/ 21.4K    0.0B/  0.0B 505.2M/505.2M   0.0B/  0.0B 505.2M/505.2M
2026/02/03 10:37:00  555.4 329.6   2.05/164.31   0.00/468.37   2140 4049.55%   7.98/530.11   2.56/  0.29   44.4   1.6    0%   0%  16%   0%   0%    12/  12/   0/   0/   0/   0       1/ 1       C--VM--I---   34%/ 65%[7]   33%[ 2]    0.4/  2.3  198.0/  7.0  58.7K/  2.6K 101.9K/492.0B 413.0B/ 48.0B 161.0K/  2.0K    0.0B/  0.0B 516.6M/516.6M   0.0B/  0.0B 516.6M/516.6M
2026/02/03 10:47:00  377.3 296.0   0.00/  0.20   0.00/  0.25   1547 4060.35%   7.37/360.23   2.15/  0.28   42.7   1.5    0%   0%   0%   0%   1%    11/  11/   0/   0/   0/   0       1/ 2       C---M--I---   26%/ 59%[16]  40%[ 2]    0.9/  3.0  192.5/ 12.5  21.6K/  7.3K  88.0K/  8.1K 391.0B/  9.0B 109.9K/ 15.3K    0.0B/  0.0B 388.8M/388.8M   0.0B/  0.0B 388.8M/388.8M
2026/02/03 10:57:00  587.5 268.6   0.00/  0.00   0.00/  0.00   2543 4169.14%   5.92/560.99   3.35/  0.26   50.6   1.9    0%   0%  15%   0%   1%     8/  10/   0/   0/   0/   0       1/ 1       C--VM--I---   19%/ 44%[16]  77%[ 2]    0.8/  1.4  192.5/ 14.5  33.6K/  4.4K  78.5K/  6.7K 298.0B/  4.0B 112.4K/ 11.1K    0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B
2026/02/03 11:07:00  577.8 399.1   0.00/  0.00   0.00/  0.00   2707 4325.07%  18.08/551.77   1.53/  0.28   14.1   1.4    0%   0%  17%   0%   1%     6/   1/   0/   0/   0/   0       1/ 1       C--VM--I---   19%/ 52%[12]  77%[10]    0.9/  2.0  173.5/  6.5  32.3K/ 12.7K  63.7K/ 12.2K   1.1K/504.0B  97.0K/ 24.4K    0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B
2026/02/03 11:17:00  472.8 639.6   0.39/ 20.35   0.00/667.53   2202 4504.55%   5.88/451.60   2.69/  0.28    1.0   1.8    0%   3%  18%   0%   1%     6/   0/   0/   0/   0/   0       1/ 0       C---M--I---   20%/ 40%[61]  75%[ 2]    0.6/  1.4  178.5/  2.5  31.8K/  1.1K  47.5K/  2.5K 357.0B/196.0B  79.6K/  3.4K    0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B
2026/02/03 11:27:00  382.2 189.9   0.00/  0.00   0.00/  0.00   1869 4685.39%  61.04/365.38   0.77/  0.30    3.0   1.1    0%   2%  18%   0%   1%     6/   2/   0/   0/   0/   0       1/ 1       C--VM--I---   20%/ 54%[16]  75%[ 2]    0.4/  1.3  180.0/  2.0  21.9K/863.0B  57.3K/  3.3K   3.7K/740.0B  82.9K/  4.9K    0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B
2026/02/03 11:37:00  368.0 148.0   0.00/  0.00   0.00/  0.00   1786 4847.44%  64.57/351.79   0.82/  0.30    1.4   1.2    1%   1%  17%   0%   0%     2/   0/   0/   0/   0/   0       1/ 0       C--VM--I---   19%/ 61%[16]  76%[ 2]    0.5/  2.3  158.5/  1.5  21.0K/  3.8K  31.8K/  4.1K   4.0K/402.0B  56.8K/  8.3K    0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B
2026/02/03 11:47:00   42.4  12.2   0.04/  2.04   0.00/ 49.73    165 4946.83%   9.93/ 40.48   0.70/  0.28    1.0   1.4    1%   5%  21%   0%   2%     0/   0/   0/   0/   0/   0       0/ 0       C--VM--I---   16%/ 60%[7]   64%[ 2]    0.4/  7.0  153.5/  2.5   2.9K/360.0B  13.7K/501.0B 731.0B/142.0B  17.3K/718.0B    0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B   0.0B/  0.0B
2026/02/03 11:57:00    0.4  49.3   0.00/  0.00   0.00/  0.00     10 4950.36%  11.61/  0.46   0.36/  5.37    1.0   1.3    0%   2%  22%   1%   1%     0/   0/   0/   0/   0/   0       0/ 0       C--VM--I--L   15%/ 78%[5]   74%[ 2]    0.8/  5.8  156.0/  2.0  52.0B/  4.0B  17.3K/  2.3K 747.0B/636.0B  18.1K/  2.9K    0.0B/  0.0B  46.3M/ 46.3M   8.2M/  8.2M  54.4M/ 54.4M

A definition of each output field is provided below:

Throughput:

Read           read throughput from the DDR (pre-comp)
Write          write throughput to the DDR (pre-comp)
Repl Network   replication network throughput into and out of the DDR
Repl Pre-comp  replication pre-comp throughput into and out of the DDR (always zero for collection replication)
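
For example, in the first sample row above, the DDR is reading 175.8 MB/s and writing 444.0 MB/s (both pre-comp), while replication is carrying 7.45 MB/s in and 734.06 MB/s out on the network.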

 

Protocol:

ops/s               FS protocol-level operations per second
load %              load percentage (pending ops / total RPC ops * 100) (NOTE: erroneously exceeds 100% in some DDOS versions)
data(MB/s) in/out   protocol throughput: the amount of data the filesystem can read from and write to the kernel socket buffer
wait(ms/MB) in/out  time taken to send and receive 1 MB of data between the filesystem and the kernel socket buffer
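
As a worked example of the load formula: an interval with 40 pending operations against 1,000 total RPC operations reports a load of 40 / 1000 * 100 = 4%. The sample output above shows values near 4000%, an instance of the erroneous reporting noted above.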

 

Compression:

gcomp  global compression factor (deduplication)
lcomp  local compression factor
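
The two factors multiply: in the first sample row above, gcomp 11.0 and lcomp 1.2 mean the data written in that interval was reduced roughly 11.0 x 1.2 ≈ 13.2x from its pre-comp size.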

 

Cache Miss: (only meaningful while reads are ongoing; for all metrics below, higher is worse)

thra  Percent of compression units that have been read and discarded without being used.
      A high percentage indicates cache thrashing.
unus  Percent of a compression unit's data that is unused (a compression unit contains multiple individual unique segments compressed together).
      A high percentage indicates poor data locality (the unique segments of the files being read are not tightly packed together).
ovhd  Percent of a compression unit cache block that is unused. Compression regions are stored in fixed-size (128 KB) blocks.
      A high ovhd relative to unus indicates that a lot of space is wasted to cache block fragmentation.
      In the ideal case, ovhd == unus.
data  Percent of data segment lookups that miss in the cache. A high percentage indicates poor data prefetching.
meta  Percent of metadata segment lookups that miss in the cache. For each data access, a metadata lookup is performed first, followed by a data lookup.
      A high percentage indicates poor metadata prefetching.
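
As an illustrative reading of unus versus ovhd: if a 96 KB compression region sits in a fixed 128 KB cache block and only 48 KB of the region's data is used, unus is 50% (48 of 96 KB unused) while ovhd is 62.5% (80 of 128 KB unused); the 12.5-point gap is the space lost to cache block fragmentation.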

 

Streams: (number of active external streams, that is, streams kept between the DD and the backup applications)

rd        active external read streams
wr        active external write streams
r+        read file streams reopened in the past 30 seconds; streams are reopened when the supported stream allocation is exhausted
w+        write file streams reopened in the past 30 seconds; streams are reopened when the supported stream allocation is exhausted
Repl in   MTree and BOOST replication streams for incoming replication to the DD
Repl out  MTree and BOOST replication streams for outgoing replication from the DD

 

MTree Active: (number of actively used MTrees for reads and writes)

rd  combined number of FS MTrees being used for ongoing file reads (restores)
wr  combined number of FS MTrees being used for ongoing file writes (backups)

 

State: (important background activity)

C Cleaning (either Active or Cloud Tier)
D Disk reconstruction
B MNC Rebalance
V Verification
M Fingerprint Merge
S Summary Vector Checkpoint
F Data Movement to Cloud Unit
I Data Integrity

 

Utilization:

CPU   average CPU utilization and the utilization of the busiest CPU (whose ID is shown in brackets)
disk  disk I/O % utilization of the busiest disk (the drive index in brackets cannot be mapped to a given disk)
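
For example, "46%/ 73%[16]" in the first sample row is an average CPU utilization of 46% with the busiest CPU (ID 16) at 73%, and "48%[ 2]" is the busiest disk (index 2) at 48% I/O utilization.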

 

Latency:

avg/sdev  the average and standard deviation of the response time of the "ddfs" process when servicing all protocol requests, excluding the time to receive/send the request/reply; higher values indicate the "ddfs" process may be overwhelmed
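
For example, "0.6/  3.5" in the first sample row is an average response time of 0.6 ms with a 3.5 ms standard deviation; a deviation much larger than the average suggests occasional slow requests rather than uniformly high latency.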

 

SS LOAD BALANCE (user/repl): internal Segment Store metrics for user and replication workloads; only meaningful to Data Domain Engineering

stream    the number of open streams
prefetch  the number/percentage of prefetch requests
rd        the number of read requests
wr        the number of write requests
tot       the total number of requests

 

SS LOAD BALANCE (gc): internal Segment Store metrics for GC (cleaning); only meaningful to Data Domain Engineering

prefetch  prefetch processes
rd        read processes
wr        write processes
tot       the total number of gc processes

 

Additional Information
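
The legacy view is fixed-layout text, so the first four whitespace-separated fields of every data row are the date, time, pre-comp read throughput, and pre-comp write throughput. As a minimal sketch (an illustration, not a supported Dell tool), the following Python script pulls those fields out of SSP output saved to a text file and averages the throughput:

  #!/usr/bin/env python3
  """Minimal sketch: extract pre-comp read/write throughput (MB/s) from
  'system show performance view legacy' output saved to a text file.
  Assumes the column order shown in the sample above; this script is
  illustrative only, not a supported Dell tool."""
  import re
  import sys

  # Data rows start with a "YYYY/MM/DD HH:MM:SS" timestamp.
  ROW = re.compile(r"^\d{4}/\d{2}/\d{2}\s+\d{2}:\d{2}:\d{2}\s")

  def parse(path):
      rows = []
      with open(path) as fh:
          for line in fh:
              if ROW.match(line):
                  date, time, read, write = line.split()[:4]
                  rows.append((f"{date} {time}", float(read), float(write)))
      return rows

  if __name__ == "__main__":
      samples = parse(sys.argv[1])  # for example: ssp.txt (hypothetical file name)
      if samples:
          avg_rd = sum(r for _, r, _ in samples) / len(samples)
          avg_wr = sum(w for _, _, w in samples) / len(samples)
          print(f"{len(samples)} intervals: avg read {avg_rd:.1f} MB/s, "
                f"avg write {avg_wr:.1f} MB/s")
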
Affected Products

Data Domain
Article Properties
Article Number: 000009792
Article Type: How To
Last Modified: 09 Feb 2026
Version:  7