Optimising read performance - DD6300
We have a couple of new DD6300s configured with around 100 TB for backup storage. We are quite new to Data Domains and are probably doing something wrong with our configuration. The read performance is currently too low to be practical. I'm having difficulty determining what we should be seeing, as there doesn't seem to be much documentation on actual performance expectations for each configuration option, so I'm asking this forum for help.
At the moment it is configured as a CIFS file share with LZ compression enabled, the 20% random / 80% sequential workload profile, and a single 10GbE connection to our network. I have about 16 TB of data of mixed types on the DD, output from our backup system (Commvault). In the normal case, copying with Windows Explorer, I see about 6-12 MB/s read speed from the Data Domain share. In the best case (I'm not sure what conditions allow this) I see 40-50 MB/s copying the same files from the Data Domain in Windows Explorer.
I need to configure these devices to achieve around 100 MB/s read performance in normal operation to make restores practical. Any compression is a bonus once we have established an acceptable read speed.
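To put that target in perspective, here is a quick back-of-envelope calculation for restoring the ~16 TB data set at the speeds mentioned above (decimal units assumed):

```python
# Rough restore times for the ~16 TB on the DD, at observed vs. target
# read rates. Decimal units: 1 TB = 1e12 bytes, 1 MB = 1e6 bytes.
dataset_bytes = 16e12

def restore_hours(rate_mb_per_s):
    """Hours needed to read the full data set at a given MB/s."""
    return dataset_bytes / (rate_mb_per_s * 1e6) / 3600

print(round(restore_hours(10), 1))   # ~10 MB/s observed: ~444 hours
print(round(restore_hours(100), 1))  # 100 MB/s target:   ~44 hours
```

So at the observed rates a full restore would take weeks, while the 100 MB/s target brings it into the range of days.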
Should I turn off LZ compression? Do I need to install DD Boost somewhere? Is there a way to see what the Data Domain is doing? I tried "system show performance"; it shows 11.9 MiB/s with 113 IOPS, 47 ms read protocol latency and 4 read streams. Every other column is 0.00, and the CPU charts show 0-5% CPU load at all times.
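One way to read those counters (a sketch, using only the figures quoted above): for synchronous readers that keep one I/O outstanding each, Little's law says IOPS ≈ streams / latency, which suggests the workload is latency-bound rather than disk- or CPU-bound:

```python
# Sanity-check the "system show performance" counters quoted above.
# For synchronous, one-outstanding-I/O readers:
#   IOPS ≈ streams / latency, throughput ≈ IOPS × avg I/O size.
streams = 4
latency_s = 0.047          # 47 ms read protocol latency
observed_mib_s = 11.9
observed_iops = 113

predicted_iops = streams / latency_s                 # ~85, near observed 113
avg_io_kib = observed_mib_s * 1024 / observed_iops   # ~108 KiB per read

print(round(predicted_iops), round(avg_io_kib))
```

The predicted figure lands in the same ballpark as the observed IOPS, which is consistent with a small number of serial readers waiting on ~47 ms round trips: more parallel streams (or larger reads) would be needed to raise throughput.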
rugby01
March 2nd, 2018 05:00
Are you doing LZ compression before sending to the Data Domain? If so, stop it. Any compression of the client files before sending to the DD is counterproductive: it reduces backup and recovery speed, and it also reduces deduplication to zero. For DD6300 performance on Windows, single-stream performance is rated around 300 MB/s per stream, and you should be able to hit around 1200 MB/s over a single 10GbE NIC with multiple streams.
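Using the rated figures above, a rough model of aggregate read rate looks like this (the per-stream and NIC numbers are from this post; the assumption of linear scaling up to the NIC ceiling is a simplification, real scaling is rarely perfectly linear):

```python
# Rough aggregate read-rate model from the rated figures above.
per_stream_mb_s = 300    # rated single-stream read rate (from the post)
nic_limit_mb_s = 1200    # practical ceiling quoted for one 10GbE NIC

def aggregate_mb_s(streams):
    # Assume roughly linear scaling, capped by the NIC ceiling.
    return min(streams * per_stream_mb_s, nic_limit_mb_s)

print(aggregate_mb_s(1))  # 300
print(aggregate_mb_s(4))  # 1200
print(aggregate_mb_s(8))  # 1200, NIC-bound beyond 4 streams
```

The takeaway: with these ratings, around four parallel read streams should already saturate a single 10GbE link, so a single-threaded Explorer copy is not a representative test.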
These suggestions are from the CommVault website:
What are the recommended settings for EMC® Data Domain® disk libraries?
Simpana works best with EMC Data Domain disk libraries if you minimize how often Simpana accesses the disk library.
Apply all of the recommended settings to each MediaAgent that is associated with the disk library.
Reduce Access Frequency During Backup Operations
Prevent Data Pruning During Peak Backup Times
This procedure prevents physical pruning of data during peak backup times. However, it provides enough time for pruning to run outside of the peak times so that the disk library does not run out of space.
Minimize the Number of Volumes on the MediaAgent and the Frequency of Volume Size Updates
Prevent SMB/CIFS Session Leaks on the Windows Operating System
In addition to the recommended settings for each MediaAgent, consult the following Microsoft Knowledge Base article:
"SMB/CIFS sessions leak in Windows Vista, in Windows Server 2008, in Windows 7 and in Windows Server 2008 R2", http://support.microsoft.com/kb/2537589
Also make sure you're on the latest OS upgrades, as caching changed between the 5.6 code levels and 5.7-6.x. The next thing to check is, of course, your network speed, and that can be tested with iperf. It's built into the Data Domain OS and runs via the "net iperf" command. Download iperf for Windows from the web onto your server, then fire it up as a client on one side and a server on the other. It will generate a large transfer between the two endpoints for a few seconds and report maximum throughput in MB/s, latency, and packet drops. If you're running 10GbE end to end, you should get a reading of 900-1200 MB/s.
iss-operations
March 4th, 2018 15:00
Thank you for your comprehensive answer.
1) I found quite a few older Commvault storage policies/subclients I had simply moved across to the DD that had either compression or encryption enabled; I've fixed these now.
2) I had already applied the Commvault recommendations.
3) Thanks, I didn't know the DD had iperf; I really should go through the command reference! Anyway, it gets 8.41/8.83 Gbit/s to the Windows host I was using for testing, which suggests the 10GbE is working as expected.
4) We are currently running 6.0.1.10-561375 on the units.
From the looks of it, the most likely cause is that I didn't validate the settings on all of the old policies as I moved them across to the Data Domains. I've remedied this now and will monitor; fingers crossed.
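For reference, the iperf readings in point 3 convert to roughly a gigabyte per second of raw link capacity, far above the observed 6-12 MB/s copy speeds, which rules the network out (simple unit conversion, decimal units assumed):

```python
# Convert the measured iperf rates from Gbit/s to MB/s.
# Decimal units: 1 Gbit = 1e9 bits, 1 MB = 1e6 bytes, 8 bits per byte.
def gbit_to_mb(gbit_per_s):
    return gbit_per_s * 1e9 / 8 / 1e6

print(round(gbit_to_mb(8.41)))  # 1051 MB/s
print(round(gbit_to_mb(8.83)))  # 1104 MB/s
```

Both readings sit near the 900-1200 MB/s range suggested earlier in the thread for a healthy 10GbE path.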