
Optimising read performance - DD6300

We have a couple of new DD6300s configured with around 100TB for backup storage. We are quite new to Data Domains and are probably doing something wrong with our configuration. The read performance is currently too low to be practical. I'm having difficulty determining what we should be seeing, as there doesn't seem to be much documentation on actual performance expectations for each configuration option, so I'm asking this forum for help.

At the moment it is configured as a CIFS file share with LZ compression enabled, a 20% random / 80% sequential workload profile, and a single 10GbE connection to our network. I have about 16TB of data on the DD, of assorted types, output from our backup system (Commvault). In the normal case, if I copy files using Windows Explorer, I see about 6-12MB/s read speed from the Data Domain share. In the best case (I'm not sure what circumstances allow this) I'll see 40-50MB/s copying the same files from the Data Domain.

I need to configure these devices to achieve around 100MB/s read performance in normal operation to make restores practical. Any compression is a bonus once we have established an acceptable read speed.
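To put that target in perspective, here is a quick back-of-the-envelope sketch (plain Python, illustrative figures only, assuming sustained throughput and no other bottlenecks) of how read speed translates into restore time:

```python
# Rough restore-time estimate at different sustained read speeds.
# The speeds are illustrative, taken from the figures observed above.

def restore_hours(data_gb: float, mb_per_s: float) -> float:
    """Hours needed to read data_gb at a sustained mb_per_s."""
    return (data_gb * 1024) / mb_per_s / 3600

for speed in (10, 50, 100):  # MB/s
    print(f"1 TB restore at {speed} MB/s: {restore_hours(1024, speed):.1f} h")
```

At the ~10MB/s we see today a 1TB restore takes over a day; at 100MB/s it is under three hours.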

Should I turn off LZ compression? Do I need to install DD Boost somewhere? Is there a way to see what the Data Domain is doing? I tried "system show performance"; it shows 11.9 MiB/s with 113 IOPS, 47ms read protocol latency and 4 read streams. Every other column is 0.00. The CPU charts show 0-5% CPU load at all times.

2 Replies

Re: Optimising read performance - DD6300

Are you doing LZ compression before sending to the Data Domain? If so, stop it. Any compression of the client files before sending to the DD is counterproductive: it reduces backup and recovery speed, and it also reduces deduplication to zero. For DD6300 performance on Windows, single-stream read performance is rated around 300 MB/s, and you should be able to hit around 1200 MB/s with a single 10GbE NIC and multiple streams.
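From those rated figures, a rough sketch (hypothetical arithmetic based only on the estimates above, not an official spec) of how many concurrent restore streams it takes to fill the link:

```python
# How many concurrent read streams are needed to approach link speed,
# using the rated figures quoted above for a DD6300.
import math

PER_STREAM_MB_S = 300   # quoted single-stream read rate
LINK_MB_S = 1200        # quoted practical ceiling for one 10GbE NIC

streams_needed = math.ceil(LINK_MB_S / PER_STREAM_MB_S)
print(streams_needed)   # concurrent restore streams to saturate the NIC
```

In other words, a single Windows Explorer copy (one stream) can never reach the NIC's ceiling; multi-stream restores are needed.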

These suggestions are from the CommVault website:

What are the recommended settings for EMC® Data Domain® disk libraries?

Simpana works best with EMC Data Domain disk libraries if you minimize how often Simpana accesses the disk library.

Apply all of the recommended settings to each MediaAgent that is associated with the disk library.

Reduce Access Frequency During Backup Operations

  1. From the CommCell Browser, expand Storage Resources | MediaAgents.
  2. Right-click the appropriate MediaAgent, and then click Properties.
  3. Click the Additional Settings tab.
  4. Click Add. The Add Additional Settings dialog box appears.
  5. In the Name box, type DMDontUpdateVolumeSizeDuringBackup. The Category and Type details fill automatically.
  6. In the Value box, type 1.
  7. Click OK.
  8. Click OK.

Prevent Data Pruning During Peak Backup Times

This procedure prevents physical pruning of data during peak backup times. However, it provides enough time for pruning to run outside of the peak times so that the disk library does not run out of space.

  1. From the CommCell Browser, right-click the CommServe node, and then click Properties.
  2. On the Additional Settings tab of the CommCell Properties dialog box, click Add.
  3. In the Add Additional Settings dialog box, enter the following:
    1. In the Name box, type sMMDoNotPruneInterval. The Category and Type details fill automatically.
    2. In the Value box, enter the appropriate time interval in hours. For example, to prevent pruning from 10:00 AM to 12:00 PM and from 8:00 PM to 11:00 PM, type 10-12, 20-23. To prevent pruning from 6:00 PM to 7:00 AM the next day, type 18-7.
    3. Click OK.
  4. Click OK.
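As a sketch of how that hour-range value behaves, the following hypothetical parser (not Commvault code; it assumes half-open windows, so 10-12 blocks the 10:00 and 11:00 hours) expands a value like the examples above:

```python
# Hypothetical illustration of the sMMDoNotPruneInterval hour-range format
# described above (e.g. "10-12, 20-23" or the overnight "18-7").
# This is NOT Commvault code -- just a sketch of how the ranges map to hours.

def no_prune_hours(spec: str) -> set[int]:
    """Expand a spec like '10-12, 20-23' into the set of blocked hours."""
    hours: set[int] = set()
    for part in spec.split(","):
        start, end = (int(x) for x in part.strip().split("-"))
        if start <= end:                  # same-day window
            hours.update(range(start, end))
        else:                             # overnight window, e.g. 18-7
            hours.update(range(start, 24))
            hours.update(range(0, end))
    return hours

print(sorted(no_prune_hours("10-12, 20-23")))
```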

Minimize the Number of Volumes on the MediaAgent and the Frequency of Volume Size Updates

  1. On the ribbon in the CommCell Console, click the Storage tab, and then click Media Management. The Media Management Configuration dialog box appears.
  2. On the Service Configuration tab, set both of the following parameters. Note: If you have upgraded from V 9.0 to the current version, the Interval between volume size update requests and the Number of volumes for size update parameters are available on the Data Aging tab.
    • For Number of volumes for size update, type 50. You can use a value other than 50, but it should be a small value.
    • For Interval between volume size update requests, type 2880. The value 2880 minutes = 2 days.
  3. Click OK.

Prevent SMB/CIFS Session Leaks on the Windows Operating System

In addition to the recommended settings for each MediaAgent, consult the following Microsoft Knowledge Base article:

"SMB/CIFS sessions leak in Windows Vista, in Windows Server 2008, in Windows 7 and in Windows Server 2008 R2".

Also make sure you're on the latest OS upgrades, as caching has changed from the 5.6 code level to 5.7-6.X. The next thing to check is of course your network speed, and that can be tested with "iperf". It's loaded on the Data Domain OS and runs via the "net iperf" command. Download iperf from the web for Windows on your server, then fire it up as client on one side and server on the other. It will generate a massive transfer between the two end points for a few seconds and report max performance in MB/s, latency, and packet drops. If you're running 10GbE end to end, you should get a reading of 900-1200 MB/s.
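A minimal sketch of such a run (the Data Domain end is started with "net iperf" as described above; check the DD OS command reference for its exact arguments, and <dd-host> below is a placeholder, not a real hostname):

```
# On the Windows server: drive the test against the DD's iperf listener.
# Standard iperf2 flags shown; nothing here is Data Domain specific.
iperf -c <dd-host> -t 10 -P 4 -f M   # 10-second test, 4 parallel streams, report in MBytes/s

# Or reverse the roles: listen on the Windows side instead.
iperf -s -f M
```

Multiple parallel streams (-P) matter here for the same reason they matter for restores: a single TCP stream often won't fill a 10GbE link on its own.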


Re: Optimising read performance - DD6300

Thank you for your comprehensive answer.

1) I have found quite a few older Commvault storage policies/sub-clients I had just moved across to the DD which had either compression or encryption enabled; I've fixed these now.

2) I had already applied the Commvault recommendations.

3) Thanks, I didn't know the DD had iperf; I really should go through the command reference! Anyway, it gets 8.41/8.83 Gbit/s to the Windows host I was using for testing, which suggests the 10GbE is working as expected.
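For reference, converting that line rate to MB/s shows the network was never the bottleneck (a quick sketch using decimal units):

```python
# Convert the measured iperf line rate into MB/s so it can be compared
# directly with the file-copy throughput figures quoted earlier.

def gbit_to_mb_per_s(gbit: float) -> float:
    """Gigabits per second to megabytes per second (decimal units)."""
    return gbit * 1000 / 8

print(round(gbit_to_mb_per_s(8.41)))  # raw network headroom in MB/s
```

Roughly 1050 MB/s of network headroom against 6-12 MB/s of observed reads points the finger firmly at the data path, not the wire.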

4) We are currently running on the units.

From the looks of it, it's most likely because I didn't validate the settings on all the old policies as I moved them across to the Data Domains. I've remedied this now and will monitor, fingers crossed.
