VxFlex OS (ScaleIO)

2 Bronze



Although we love VMware and ScaleIO, the storage performance in vSphere is lagging far behind Windows Storage Spaces Direct and System Center.

Could anyone at EMC indicate if and when ScaleIO will finally leverage RDMA (or NVMe-oF) and support vSphere 6.5?

Thank you.

Replies (7)
3 Argentum

Hi Melville

I would recommend opening a support ticket to determine why you are experiencing such poor performance.

To answer your other questions:

NVMe can be used by ScaleIO now.

We currently have no plans related to RDMA.

vSphere 6.5 is coming very soon. Expect to see an announcement later this month.

Thank you,


Hi again,

We are currently evaluating the product, so I don't think it is possible to open a ticket at the moment. We also have to decide on the converged infrastructure that we will recommend to our clients; some of them are already using vSphere 6.5.

Just wanted to mention that scaling is not the issue. Performance on distributed load is good in aggregate, but single-VM performance is far below what the hardware seemingly offers.

We performed some tests last year with an RDMA-enabled scale-out solution on KVM (all-flash configuration, 40 GbE), which gave us much lower latency and at least 4 to 5 times the throughput in single-threaded loads compared with vSphere 6.0, with of course many more CPU cycles left over for the compute loads.

Back to the questions:

- You mean that ScaleIO support for vSphere 6.5 is expected later this month?

- PVRDMA virtual adapters will not be used in the coming ScaleIO versions?

- Is NVMe over Fabrics supported now, or will it be?

Thank you!

Interesting. Can you tell me more about your configuration?

Yes, 6.5 support is coming this month.

What is the use case/configuration you see for using NVMe over Fabrics with ScaleIO?

Looking into the PVRDMA feature, it appears to apply only to VMs that use RDMA. ScaleIO works over IP networks.

The current test configuration is 4 to 8 Skylake-based hosts, Fusion-io PCIe flash, Intel DC S3700 SSDs plus Intel DC P3700 NVMe 2.5" drives if the platform supports NVMe over Fabrics, Mellanox ConnectX-3 40GbE adapters, and Mellanox 40GbE switching.

This is a PoC for running the following in a hyperconverged ecosystem:

- vSphere, KVM or Hyper-V virtualization platform

- Active Directory

- Exchange Server

- Microsoft Clustered File Services

- VMware Horizon or Citrix VDI

- Remote Desktop Services

- Oracle Databases

- ERP system

PVRDMA does apply to VMs, but the ScaleIO SDS itself runs as a VM in vSphere.

The other option, if ScaleIO supported RDMA, would be to completely dedicate and pass through an RDMA-capable adapter to each SDS VM.

Don't you pay a substantial CPU and latency tax by going through the whole TCP stack?
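To make the "TCP tax" point concrete, here is a rough sketch (mine, not from any vendor docs) of a loopback ping-pong microbenchmark in Python. Loopback skips the NIC entirely, so absolute numbers are illustrative only, but every round trip still crosses the kernel socket/TCP path twice per direction, which is exactly the per-I/O software overhead that RDMA verbs bypass. The port and message size are arbitrary choices.

```python
# Minimal TCP ping-pong latency sketch (illustrative only).
# Each small message pays the full kernel socket/TCP path on send and receive.
import socket
import threading
import time

def echo_server(listener):
    # Accept one connection and echo everything back until EOF.
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # any free port on loopback
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle batching

N = 10_000
payload = b"x" * 16                  # tiny message, like an I/O completion token
start = time.perf_counter()
for _ in range(N):
    cli.sendall(payload)
    buf = cli.recv(64)               # strict ping-pong: wait for the echo
elapsed = time.perf_counter() - start
rtt_us = elapsed / N * 1e6
print(f"avg round-trip: {rtt_us:.1f} us over {N} messages")
cli.close()
```

Even on loopback this typically lands in the tens of microseconds per round trip on commodity hardware, while RDMA fabrics advertise single-digit microsecond latencies, so the gap the question is about is real.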

Hey Melville,

what solution did you end up with?

I am looking at a similar setup (smaller scale) and would be interested in what you decided on.



3 Argentum

The performance of Storage Spaces Direct is only valid if you stay inside the Windows ecosystem.

RDMA is used to solve the latency issues with SMB (which normally adds latency and a CPU penalty) and also to enable a virtual storage path to remote devices (again, over SMB).

Other features, like the SMB cache, are also used to get that performance.

Try testing the same thing on Hyper-V with ScaleIO and compare those numbers, rather than comparing vSphere ScaleIO with Hyper-V S2D.

The right comparison would be S2D vs. vSAN.

If you need both worlds, then ScaleIO is the perfect fit for your use case, but do not expect features like RDMA or even SMB to be integrated into ScaleIO just because S2D needs them...

2 Bronze

ScaleIO now supports ESXi 6.5.

