July 26th, 2017 19:00

PowerEdge VRTX Shared Storage [Redundant PERC8] extremely slow?

Has anyone been able to get the VRTX shared Storage to perform?

I have seen multiple old threads here from 2014-15 about removing the redundant PERC in order to use write-back caching, but I have the current firmware installed, and with it write-back and "forced write-back" are available options with the redundant controllers installed (and not disabled). Even so, the performance of the virtual disks (for lack of a better term) is terrible. I was getting better performance from a remote iSCSI array than the local storage in the VRTX chassis is providing, which is ridiculous!

Below is how the VDs are configured (either write-back or forced write-back, but neither provides acceptable performance).

Any advice on how to get this chassis and its integrated functionality to actually be usable would be greatly appreciated!

Presently, 2 nodes are used for a VMware 6.5 lab and the 3rd node is being used to test Azure Stack TP3; the performance of the chassis is delaying our ability to move forward with validating both platforms.

Moderator • 6.2K Posts

July 27th, 2017 10:00

Hello

Can you provide more information about the performance? Also, can you provide more information about the disks you are using: are they certified, SAS, SATA, or SSD, and what are the RPM, interface bandwidth, etc.?

Thanks

28 Posts

July 28th, 2017 09:00

Daniel,

Thanks for the reply.

For the ESX shared storage I have (4) 9W5WV Dell 1TB 7.2K 6G SAS drives in a RAID5 configuration.

The ESX hosts are M620s with 96GB of memory and dual E5-2660 CPUs and are not overcommitted, but the virtual machines hosted on them are very slow to respond, especially when reading data from the virtual disks backing them.
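For reference, a back-of-envelope sanity check of what four 7.2K spindles in RAID5 should deliver (a rough sketch only; the ~75 IOPS per 7.2K drive figure and the RAID5 small-write penalty of 4 are generic rules of thumb, not measured values for these specific 9W5WV disks):

# Rough RAID5 expectation for 4x 7.2K SAS drives (rule-of-thumb numbers).
drives = 4
iops_per_drive = 75            # assumed ~75 random IOPS for a 7.2K RPM drive
write_penalty = 4              # RAID5 small write = 2 reads + 2 writes

read_iops = drives * iops_per_drive
write_iops = drives * iops_per_drive / write_penalty
print(f"expected random read IOPS:  ~{read_iops}")
print(f"expected random write IOPS: ~{write_iops:.0f}")
# Sequential throughput should be far higher than low double-digit MB/s,
# especially with write-back cache enabled, so sustained numbers that low
# point at something other than the spindles themselves.

Even with only ~300 random read IOPS available from the spindles, responsiveness should be far better than what we are seeing.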

Moderator • 6.2K Posts

July 28th, 2017 10:00

the virtual machines hosted on them are very slow to respond, especially when reading data from the virtual disks backing them.

I suggest doing some type of read/write test. I'm not going to be able to help much with only this information.
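Even something as simple as a timed sequential write/read from inside one of the guests will give a first number to work with. A minimal sketch (assuming Python is available in the guest; the 1 GiB size, 1 MiB block size, and file name are arbitrary choices, and the read pass may be inflated by the OS page cache):

import os
import time

TEST_FILE = "testfile.bin"     # hypothetical path on the VD-backed disk
BLOCK = 1024 * 1024            # 1 MiB per I/O
COUNT = 1024                   # 1 GiB total

buf = os.urandom(BLOCK)

# Sequential write
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())       # make sure the data actually reaches the disk
write_mbs = COUNT / (time.time() - start)

# Sequential read (may be served partly from the OS cache)
start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(BLOCK):
        pass
read_mbs = COUNT / (time.time() - start)

os.remove(TEST_FILE)
print(f"sequential write: {write_mbs:.1f} MB/s, sequential read: {read_mbs:.1f} MB/s")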

Thanks

28 Posts

July 28th, 2017 15:00

Daniel,

I am waiting for the results from IOMeter, but I can tell you that it has been running for 2 hours and has not finished the first of 6 passes on the 2 attached virtual drives.

Moderator • 6.2K Posts

July 28th, 2017 16:00

13MB/s is quite slow. Those speeds are normal with sustained writes on our lower end controllers without cache, but the SPERC should be faster than that. It appears something is wrong.

Are you using Dell certified drives?
Did you install VMware using our custom ISO that contains SAS drivers specifically for the SPERC?
Are the virtual disks in shared mode or assigned only to a single node?

Thanks

28 Posts

July 31st, 2017 14:00

Yup, Yup, Yup...

Using A04 (the latest custom ISO) for the install.

Drives are shared across 2 hosts.

Dell certified drives.

Moderator • 6.2K Posts

July 31st, 2017 17:00

Drives are shared across 2 hosts.

Have you tested with those nodes shut down to see if the cluster is what is slowing performance?

I am assuming that you do not have high utilization on the disks since this is a lab setup. Do you have little or no disk utilization on the controller aside from what you are testing?

28 Posts

July 31st, 2017 19:00

Daniel,

 

There is no other traffic.

There is one other node in the chassis, and it was running Azure Stack TP3, but the performance was so terrible that I tore it down. This may be a lab, but I do put it through the wringer: I run reverse proxies, multiple IIS sites, Active Directory, SQL clustering, etc.

The M620s are 2x E5-2660s with 64GB, so I am not memory paging; I am under 30% allocated.
