PowerVault MD3200i and MD3220i, sequential reads slower than writes?
I'm puzzled by the numbers coming out of tests I'm running on two PowerVault iSCSI storage arrays.
1x MD3220i with dual controllers and 14x 800GB 10K SAS disks, one RAID5 volume (XFS, 4K sector size)
1x MD3200i with dual controllers and 12x 4TB 7.2K SAS disks, one RAID5 volume (XFS, 4K sector size)
Both arrays are connected with crossover cables to a PowerEdge R720 (two Intel Xeon E5-2690 @ 2.90GHz, 64GB RAM) through four Broadcom quad-port gigabit cards (BCM95719).
The server runs Ubuntu 12.04.4 LTS.
I configured the two volumes without any issues and multipathing works fine; I can easily reach >500MB/s of sequential write throughput on both volumes.
On sequential reads, however, I only get about 200MB/s on the MD3220i and about 150MB/s on the MD3200i, and I can't really understand why. I always assumed read throughput would be higher than write throughput.
I have checked that multipath is in use for reads too; each network interface just tops out at about 50MB/s for the MD3220i (or about 38MB/s for the MD3200i). It looks like I am hitting a limit on the storage side; I am just stumped as to why writes perform better.
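In case it helps, this is roughly how I watch the per-interface figures during a test, without any extra tools: sample each NIC's receive-byte counter from sysfs twice and diff. The interface set (everything under /sys/class/net) and the 2-second interval are arbitrary; on the real setup only the four iSCSI ports matter.

```shell
# Crude per-interface read-throughput sampler: for each NIC, read the
# receive-byte counter, wait, read it again, and report the delta.
# Run this while the sequential read test is going.
for path in /sys/class/net/*; do
    dev=$(basename "$path")
    rx1=$(cat "$path/statistics/rx_bytes")
    sleep 2
    rx2=$(cat "$path/statistics/rx_bytes")
    echo "$dev: $(( (rx2 - rx1) / 2 / 1048576 )) MB/s rx"
done
```

During the read test, each of the four iSCSI ports shows the ~50MB/s (or ~38MB/s) ceiling described above.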
I have tested with both bonnie++ (bonnie++ -u tgs -d ./b -c 3 -n 100) and dd (dd if=/media/db/fast/01/bench/dd.dat of=/dev/null bs=4096k count=65M), with datasets that should be big enough for the controller cache not to be a factor. The buffer size on the arrays is 32K, and jumbo frames (MTU 9000) are enabled on both sides.
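For completeness, a self-contained version of my dd read test looks like this (a small scratch file stands in for the real multi-GB benchmark file; on the actual run I also drop the page cache first, which needs root, so that step is shown commented):

```shell
# Scratch file standing in for the real benchmark file; on the array this
# is far larger than the controller cache so cached hits can't inflate numbers
dd if=/dev/zero of=./dd.dat bs=1M count=64 conv=fsync 2>/dev/null

# On the real run, flush and drop the page cache first (needs root):
#   sync; echo 3 > /proc/sys/vm/drop_caches

# Sequential read; dd prints the throughput on stderr when it finishes
dd if=./dd.dat of=/dev/null bs=4M

rm -f ./dd.dat
```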
Just wondering: is it correct to assume I'm hitting the arrays' limit on reads, and if so, what is allowing the writes to be about 2.5x faster?
I have tried fiddling with multipath parameters and NIC settings (mainly TCP offload and the rx/tx buffer sizes) to no avail. CPU load stays low while the tests run, and the tests yield the same results if I run multiple instances of dd.
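The multiple-instance variant can be sketched like this (scratch files stand in for the real benchmark files; on the array each dd reads a different file so the readers don't share cached data):

```shell
# Create four scratch files (stand-ins for the real multi-GB benchmark files)
for i in 1 2 3 4; do
    dd if=/dev/zero of=./bench$i.dat bs=1M count=32 2>/dev/null
done

# Read them back in parallel; each dd reports its own throughput on stderr,
# and the aggregate still tops out at the same ceiling as a single instance
for i in 1 2 3 4; do
    dd if=./bench$i.dat of=/dev/null bs=4M &
done
wait

rm -f ./bench?.dat
```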