ScaleIO RDMA

March 31st, 2017 14:00

Hello,

Although we love VMware and ScaleIO, the storage performance in vSphere is lagging far behind Windows Storage Spaces Direct and System Center.

Could anyone at EMC indicate if and when ScaleIO will finally leverage RDMA (or NVMe-oF) and support vSphere 6.5?

Thank you.

110 Posts

April 3rd, 2017 16:00

Hi Melville,

 

I would recommend opening a support ticket to determine why you are experiencing such poor performance.

 

To answer your other questions:

NVMe can be used by ScaleIO now.

We currently don't have plans related to RDMA.

vSphere 6.5 is coming very soon. Expect to see an announcement later this month.

 

Thank you,

Jason

3 Posts

April 4th, 2017 00:00

Hi again,

We are currently evaluating the product, so I don't think it is possible to open a ticket at the moment. We also have to decide on the converged infrastructure that we will recommend to our clients; some of them are already using vSphere 6.5.

Just wanted to mention that scaling is not the issue. Aggregate performance under distributed load is good, but single-VM performance is far below what the hardware seems able to offer.


We performed some tests last year with an RDMA-enabled scale-out solution on KVM (all-flash configuration, 40 GbE), which gave us much lower latency and at least 4 to 5 times the throughput in single-threaded loads compared to vSphere 6.0, with of course many more CPU cycles left available for the compute loads.
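
For reference, a minimal sketch of the kind of single-threaded test meant above; the device path and job parameters are hypothetical placeholders, and it assumes fio is installed in the guest:

# Minimal sketch of a single-VM, queue-depth-1 latency/throughput check.
# /dev/sdb, runtime and block size are hypothetical; adapt to the test disk.
import json
import subprocess

def run_fio_single_thread(device="/dev/sdb", runtime_s=60):
    """Run one single-job, QD=1 4K random-read test and return latency/IOPS."""
    cmd = [
        "fio",
        "--name=single-vm-qd1",
        f"--filename={device}",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=1",        # queue depth 1 exposes per-I/O latency
        "--numjobs=1",        # single thread, as in the comparison above
        "--direct=1",
        "--ioengine=libaio",
        f"--runtime={runtime_s}",
        "--time_based",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(out.stdout)["jobs"][0]["read"]
    return {
        "iops": job["iops"],
        # "lat_ns" is present in recent fio releases; older builds report usec
        "mean_latency_us": job["lat_ns"]["mean"] / 1000.0,
    }

if __name__ == "__main__":
    print(run_fio_single_thread())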


Back to the questions:

- Do you mean that ScaleIO support for vSphere 6.5 is expected later this month?

- Will PVRDMA virtual adapters not be used in the coming ScaleIO versions?

- Is NVMe over Fabrics supported, or will it be?

Thank you!

110 Posts

April 4th, 2017 14:00

Interesting. Can you tell me more about your configuration?

Yes, 6.5 support is coming this month.

What is the use case/configuration you see for using NVMe over Fabrics with ScaleIO?

Looking into the PVRDMA feature, it appears to apply only to VMs that use RDMA directly. ScaleIO works over IP networks.

3 Posts

April 5th, 2017 08:00

The current test configuration is 4 to 8 Skylake-based hosts, Fusion-io PCIe cards, Intel DC S3700 SSDs, Intel DC P3700 NVMe 2.5" drives if the platform supports NVMe over Fabrics, Mellanox ConnectX-3 40GbE adapters, and Mellanox 40GbE switching.

This is a PoC for running the following in a hyperconverged ecosystem:

- vSphere, KVM or Hyper-V virtualization platform

- Active Directory

- Exchange Server

- Microsoft Clustered File Services

- VMware Horizon or Citrix VDI

- Remote Desktop Services

- Oracle Databases

- ERP system

PVRDMA does apply to VMs, but the ScaleIO SDS itself is virtualized in vSphere.

The other option, if ScaleIO supported RDMA, would be to dedicate an RDMA-capable adapter to each SDS VM via passthrough.

Don't you pay a substantial CPU and latency tax by going through the whole TCP stack?
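
To make that "tax" concrete, here is a rough back-of-the-envelope sketch; all numbers are hypothetical placeholders, not measurements from our setup:

# Rough estimate of CPU cycles spent per I/O by the storage/network path.
# All inputs below are hypothetical and for illustration only.

def cpu_cycles_per_io(cpu_util_fraction, cores, core_clock_hz, iops):
    """Cycles consumed per I/O at a given CPU utilization and IOPS level."""
    cycles_per_second = cpu_util_fraction * cores * core_clock_hz
    return cycles_per_second / iops

# Example: an SDS VM using 20% of 8 cores at 2.5 GHz while serving 100k IOPS
# would burn roughly 40,000 cycles per I/O on the software (TCP) path.
print(cpu_cycles_per_io(0.20, 8, 2.5e9, 100_000))  # -> 40000.0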

159 Posts

April 21st, 2017 03:00

The performance of Storage Spaces Direct is only valid if you stay inside the Windows ecosystem.

RDMA is used to solve latency issues with SMB (which normally adds latency and a CPU penalty) and also to enable virtual storage paths to remote devices (again, over SMB).

Also, other features like SMB caching are used to get that performance.

Try testing the same thing on Hyper-V with ScaleIO and compare the numbers, rather than comparing vSphere ScaleIO with Hyper-V S2D.

The right comparison would be S2D vs. vSAN.

If you need both worlds, then ScaleIO is the perfect fit for your use case, but do not expect features like RDMA or even SMB to be integrated into ScaleIO just because S2D needs them.

16 Posts

June 28th, 2017 09:00

ScaleIO 2.0.1.3 now supports ESX 6.5.

https://elabnavigator.emc.com/vault/pdf/ScaleIO_ESSM.pdf

33 Posts

January 25th, 2018 05:00

Hey Melville,

What solution did you end up with?

I am looking at a similar setup (smaller scale) and would be interested in what you decided on.

 

Thanks,

Regards
