45matai

Very slow backups to Data Domain 640

We're getting very slow backups to our Data Domain 640, with an average transfer rate of only 2-3 MB/s. We're backing up with Veeam 7; here are some of the bottleneck reports:

Load: Source 10% > Proxy 9% > Network 0% > Target 88%

Load: Source 20% > Proxy 14% > Network 2% > Target 98%

Load: Source 63% > Proxy 23% > Network 31% > Target 68%

Load: Source 8% > Proxy 20% > Network 0% > Target 99%

We're backing up from a Nutanix cluster to the Data Domain over NFS via a Linux mount. The Data Domain is only on 1 GbE and we don't have DD Boost yet, but 1 GbE should still top out around 125 MB/s raw, and I was reading online that people with a similar setup are getting speeds over 100 MB/s.

Does anybody have any ideas?

Thanks!

3 Replies
mkurowski

Re: Very slow backups to Data Domain 640

Is nolock enabled? Are your receive/window sizes set to 32768? Do you have attribute caching turned off or on?

Can you post your mount options and your uname/release info? And your DD OS version?
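If it's easier, something like the following should pull most of that (a quick sketch; nfsstat usually ships in the nfs-utils/nfs-common package):

  nfsstat -m        # NFS mounts with their effective options
  cat /proc/mounts  # negotiated options as the kernel sees them
  uname -a          # kernel/release info

And on the Data Domain CLI, system show version should report the DD OS release.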

-

Matthew Kurowski

Opinions/statements expressed herein are not necessarily those of my company.

45matai

Re: Very slow backups to Data Domain 640

I'm not sure; I didn't set up the Linux machine. Are there instructions on how to set it up and configure it?

mkurowski

Re: Very slow backups to Data Domain 640

(Sorry for the slow reply, but I've been tied up with work. This is my personal account, but I'm an EMC Partner and we had some pressing professional services engagements.)

I'd prefer to pull more information and establish a performance profile first, but in general you can use the following as a baseline configuration (a Linux mount sketch using these options follows the notes below). It assumes a lot (compression, streams, schedules, software versions, etc.).

Settings:

  • rsize/wsize tuned: Make sure you are at at least a 32 KB read and write size (rsize and wsize), as that is supported broadly across Linux versions/kernels. If you are using NFS v4, that size should already be the default; v3, however, defaults to 8 KB. Specifying the default size on a v4 system doesn't hurt, and it's possible the default was changed.
  • hard: Use this if the system/application is better suited to waiting for a server response than to reporting an error.
  • NFS v3: v3 has been more stable in my experience, so it's "safer" to set. Again, I don't know much about your environment.
  • intr: Used with hard mounting if you want NFS requests to be interruptible.

EMC Document 30124, on troubleshooting stale handles, provides an example: mount -F nfs -o hard,intr,vers=3,proto=tcp,rsize=1048600,wsize=1048600 rstr01:/ddvar /ddr/ddvar (note that -F is Solaris syntax; on Linux the flag is -t).

Notice:

  • high read/write sizes
  • intr used with hard
  • forcing NFS v3
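Putting those together, a minimal Linux sketch would look like the following. The hostname dd640 and export path /backup are placeholders for your own values, and I've rounded the sizes to 1 MB (1048576); the client will negotiate down if the server or kernel supports less:

  mkdir -p /mnt/dd640
  mount -t nfs -o hard,intr,vers=3,proto=tcp,rsize=1048576,wsize=1048576 dd640:/backup /mnt/dd640

The /etc/fstab equivalent:

  dd640:/backup /mnt/dd640 nfs hard,intr,vers=3,proto=tcp,rsize=1048576,wsize=1048576 0 0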

Even if the settings above bring performance to an acceptable level, you may want a deeper analysis/tuning pass. You can also have an independent health check customized for your environment.
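One quick sanity check outside of Veeam: time a large sequential write straight to the mount (using the placeholder mount point from the sketch above). Keep in mind that a stream of zeros dedupes/compresses almost perfectly on a Data Domain, so this exercises the network/NFS path more than realistic ingest:

  dd if=/dev/zero of=/mnt/dd640/nfs_test.img bs=1M count=1024 oflag=direct
  rm /mnt/dd640/nfs_test.img

If that also crawls at a few MB/s, the problem is in the mount/network rather than in Veeam.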

Let me know how things go.

Cheers,

Matthew

-

Matthew Kurowski

//The postings on this site are my own and don't necessarily represent my company's positions, strategies or opinions.
