Is anyone aware of Windows 2008 R2 SP1 needing a specific setting to improve DD Boost client direct performance towards a Data Domain system?
While performing DD Boost client direct performance tests in a NetWorker 18.104.22.168 environment, we noticed significant differences in backup performance between Windows 2008 R2 SP1 and Red Hat Linux systems. Using our own tool that creates random data (once on the Data Domain, NetWorker statistics show a dedupe ratio between 25% and 40% for initial backups), we see that Windows systems (regardless of whether they are VMs or physical systems) achieve a backup performance between 5MBps and 40MBps (restore on a physical system, however, reaches 140MBps for a single stream), while similar data on a Linux VM or physical system runs between 60MBps and 200MBps.
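The original test tool isn't shown, but a data generator along the lines described (random blocks with a tunable fraction of repeats, so an initial backup dedupes in the 25-40% range) could be sketched roughly like this; the function name, block size, and pool size are illustrative assumptions, not the actual tool:

```python
import os
import random
import tempfile

def write_test_file(path, size_mb=10, dedupe_ratio=0.3, block_kb=128):
    """Write a file where roughly `dedupe_ratio` of the blocks repeat
    from a small pool, so a deduplicating target such as a Data Domain
    sees a comparable dedupe rate on the initial backup.
    Hypothetical sketch, not the tool used in the tests above."""
    block = block_kb * 1024
    pool = [os.urandom(block) for _ in range(8)]   # reusable duplicate blocks
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            if random.random() < dedupe_ratio:
                f.write(random.choice(pool))        # repeated (dedupable) block
            else:
                f.write(os.urandom(block))          # unique block

# Write a small sample file into the temp directory.
path = os.path.join(tempfile.gettempdir(), "ddtest.bin")
write_test_file(path)
```

In a real test you would generate files of tens of GB onto the filesystem being backed up, so that the save stream runs long enough for throughput numbers to stabilise.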
Has anyone using Data Domain DD Boost client direct found any requirement or setting that could improve Windows client direct backups on Windows 2008 R2 SP1? Even a vanilla Windows 2008 R2 SP1 install shows the same behaviour. And it is not only with NetWorker: when we give the same systems a CIFS share, copying data to it from Windows isn't much faster either.
Iperf shows good performance towards the DDs (4Gbps for physicals and 2.5Gbps for VMs, which is just below the NIC limit set for these systems on the blades they run on). Copying data over the Windows NICs to each other's CIFS shares is very good, 200+MBps. Performance is only slow when Windows is sending data to the DD, both via DD Boost client direct and via a CIFS share. Disabling DSP (distributed segment processing, i.e. client-side dedupe) on the DD has no impact on the performance: it remains slow for Windows and blazing fast on Linux.
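For the CIFS-share comparison above, a simple single-stream timed write is enough to reproduce the Windows-vs-Linux gap independently of NetWorker. A minimal sketch (the target path is a stand-in; in a real run you would point it at the DD's mapped CIFS share on Windows or the mount point on Linux):

```python
import os
import tempfile
import time

def measure_write_mbps(path, size_mb=64, block_kb=256):
    """Time a sequential single-stream write of `size_mb` MB and
    return the throughput in MB/s. Rough sketch for comparing the
    same write on a Windows and a Linux client against the DD share."""
    block = os.urandom(block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # include the flush to the target in the timing
    return size_mb / (time.perf_counter() - start)

# Temp directory used here only as a placeholder target; substitute the
# mapped DD CIFS share (e.g. a mapped drive letter on Windows).
rate = measure_write_mbps(os.path.join(tempfile.gettempdir(), "tp.bin"))
print(f"{rate:.1f} MB/s")
```

Running the same script on a Windows 2008 R2 client and a Linux client against the same share isolates the sender side from anything NetWorker-specific.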
Are you using the default setup for a DD Boost device as created by the NW device wizard? Or did you, for instance, change the DD Boost device block size from "default handler" to another value? Or any other DD Boost device setting, for that matter?
I left the block size at default (handler). I also use dynamic nsrmmds; not sure how much that affects you, but it did affect my overall performance.