March 3rd, 2016 05:00

ScaleIO sds_network_test_results vs iperf

Hi

I'm getting pretty low numbers from query_sds_network_test_results (this is the best result I've seen):

SDS 192.168.1.114-ESX returned information on 2 SDSs

    SDS 5d139c4700000000 192.168.30.202 bandwidth 382.1 MB (391259 KB) per-second

Whereas the worst iperf result I've gotten out of tons of attempts is below (often it's as high as 8.5 Gbits/sec):

[ ID] Interval           Transfer     Bandwidth      

[  4]   0.00-60.00  sec  46.4 GBytes  6.64 Gbits/sec               sender

[  4]   0.00-60.00  sec  46.4 GBytes  6.64 Gbits/sec                  receiver
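To compare the two in the same units, here is my own quick arithmetic (assuming the SDS figure uses 1024-byte kilobytes):

# Convert the SDS network test result above into the units iperf reports,
# assuming the 391259 KB/s figure uses 1024-byte kilobytes.
sds_kb_per_sec = 391259
sds_gbits_per_sec = sds_kb_per_sec * 1024 * 8 / 1e9
print(f"SDS test: {sds_gbits_per_sec:.2f} Gbits/sec")                  # ~3.21 Gbits/sec
print(f"vs iperf: {sds_gbits_per_sec / 6.64:.0%} of the worst run")    # ~48%

So even my best SDS network test result is less than half of my worst iperf run over the same links.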

The SDS test seems to line up with what I see when I run ioperf against a VMware datastore on top of my ScaleIO volume (the volume is backed by a single SSD in each host), which is this:

1) Much better real-life measurements than other hardware connected to the same backend, because in real life the bottleneck is the storage: in this case the SSD is much faster than the 25-drive HDD array.

2) Slightly worse read performance, because ScaleIO is hitting a network limit of somewhere between 250-380 MBytes/sec, so I see around 270 MBytes/sec in ioperf.

3) Much worse write performance, because the writes are also copied to other storage, so it ends up at about 50% of the expected write performance. I know the SSD can do more, and the SDS network test result clearly shows some issue, but iperf run from the SVM itself gets 825 MBytes/sec. A rough sanity check on these numbers is after this list.
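Putting rough numbers on points 2 and 3 (just my own back-of-the-envelope arithmetic, taking the 250-380 MB/s network limit above as given and assuming each write is copied once for protection, as described in point 3):

# Back-of-the-envelope check of points 2 and 3 (assumptions from the post, not measurements).
network_cap_mb_per_sec = 380       # upper end of the 250-380 MB/s limit the SDS test suggests
observed_read_mb_per_sec = 270     # ioperf read throughput on the datastore

# Reads traverse the network once, so they roughly track the network cap.
print(f"reads:  {observed_read_mb_per_sec / network_cap_mb_per_sec:.0%} of the apparent cap")  # ~71%

# Writes are also copied to a second SDS, so usable write bandwidth is roughly halved.
write_copies = 2
print(f"writes: ~{network_cap_mb_per_sec / write_copies:.0f} MB/s usable")  # ~190 MB/s, i.e. ~50%

# Meanwhile iperf from the SVM shows the link itself can do far more.
svm_iperf_mb_per_sec = 825
print(f"iperf from SVM: {svm_iperf_mb_per_sec * 8 / 1000:.1f} Gbits/sec")   # ~6.6 Gbits/sec

In other words, the raw link easily sustains ~6.6 Gbits/sec, but everything going through the SDS path behaves as if it were capped at roughly 250-380 MB/s.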

Any suggestions?
