
July 19th, 2010 07:00

6500X Performance

Does anyone have experience with the 48 x 10k drive PS6500X array? Currently we have four arrays: one PS6000E and three PS6000XVs. We need to add another array, and I am contemplating the viability of the PS6500X. I understand the downside of locking in on this particular unit versus the available capacity across 10k spindles, but I am wondering how it would compare, running RAID 10, to the PS6000XV with 15k drives in RAID 50. This would accompany our ESX environment, currently about 500 VMs.

I am afraid the performance would not be worth the added cost versus the additional storage that would be made available. At how many drives does RAID 10 performance stop scaling? E.g., 15? 20? 60?

Thanks

74 Posts

July 19th, 2010 09:00

Greetings.  We have a few PS6500E (7.2k SATA) and they are good for about 3,500 IOPS in RAID10 and 1,800 in RAID50.  I too am curious what the 6500X is capable of.

31 Posts

July 19th, 2010 11:00

We have 3 PS6500E (7.2k SATA) and 2 PS6500X (10k SAS). Here are the estimated max IOPS from SAN HeadQuarters, which calculates them from the "current configuration and IO load pattern".

(They all contain volumes serving both VMFS datastores accessed by 14 ESXi 4U1 hosts with ~100 VMs (so far, still building it out) and Microsoft iSCSI initiators, both from within VMs and on physical machines, the majority being MS SQL database servers running on 2008 R2.)

RAID 10 SATA: 3,124 IOPS

RAID 50 SATA: 1,482 IOPS

RAID 10 SAS: 4,698 IOPS

RAID 50 SAS: 4,473 IOPS

These figures are per array; joined together in pools, you can span volumes across members and get much higher numbers.
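
As a rough rule-of-thumb sanity check (ballpark figures only, not measurements): a 7.2k SATA drive is typically good for roughly 75-80 random IOPS and a 10k SAS drive for roughly 140-150, so 48 spindles work out to around 3,600-3,800 and 6,700-7,200 raw read IOPS respectively, before RAID write penalties, cache, and controller limits come into play. The SATA estimates above land close to that, while the SAS estimates come in lower, which fits SAN HQ estimating from the current IO pattern rather than from a synthetic peak.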

102 Posts

July 20th, 2010 09:00

I am surprised to see the SAS RAID 10 volume not performing with much higher IOPS than the SAS RAID 50 volume. I suppose this is an indication of the current workload more than of performance capacity, but it makes one wonder whether RAID 10 is even worth the loss in capacity... the performance looks comparable to our 16-drive units.

31 Posts

July 20th, 2010 09:00

Yes, to test a hypothetical maximum I use iozone, but that is why I included the current setup; these are all determining factors. You can see that if you will be using it in a similar implementation, you may not need to use up the space creating a RAID 10. I am using them with RAID 10 and RAID 50 in a pool together and allowing the "auto RAID" algorithm to determine where the data might be better housed.
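
To give a purely illustrative idea of that kind of iozone throughput run, I mean something along the lines of:

    iozone -i 0 -i 1 -r 1m -s 8g -t 4

i.e. the sequential write and read tests with 1 MB records, an 8 GB file per thread, and 4 threads in throughput mode; the per-thread file size just needs to be large enough to get past the host and array caches.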

As an example of configuration factors, our SQL DBA, running a SQL database IO test against the "SAS" pool, achieved the following:

12 MPIO connections to a volume using the MS iSCSI initiator and the Dell MPIO plugin:
read IOPS: 11,183
avg latency: 90
MB/s: 698.9
write IOPS: 6,763
avg latency: 151
MB/s: 422.7

2 MPIO connections to a volume using the MS iSCSI initiator and the Dell MPIO plugin:
read IOPS: 5,784
avg latency: 175
MB/s: 361.5
write IOPS: 6,273
avg latency: 162
MB/s: 392.0

74 Posts

July 20th, 2010 09:00

Again, SATA units, but I agree. My interpretation of the SAN HQ statistics has always been that they reflect the current workload. I use Intel's Iometer to drive what I believe is peak.

24 Posts

August 19th, 2010 07:00

Not to hijack this thread, but can you describe the network configuration that is getting this performance? What kind of switches, Ethernet connections, topology, etc.?

Thanks in advance...

   Eric Raskin

31 Posts

October 13th, 2011 10:00

I wrote a simple C program to write a 1 MB buffer of random data repeatedly to a single large file, opening a new file when a certain size is reached. In theory this should be similar to, or slightly faster than, a Linux dd if=/dev/zero of=/somefile bs=1M type test.
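
A stripped-down sketch of that kind of test (the file names, the 10 GB rollover, and the 50 GB total below are purely illustrative) looks something like this:

    /* seqwrite.c - write a 1 MB buffer of pseudo-random data repeatedly,
     * rolling over to a new output file every FILE_LIMIT bytes.
     * Build: gcc -O2 -o seqwrite seqwrite.c
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BUF_SIZE    (1024 * 1024)                 /* 1 MB per write()     */
    #define FILE_LIMIT  (10LL * 1024 * 1024 * 1024)   /* new file every 10 GB */
    #define TOTAL_LIMIT (50LL * 1024 * 1024 * 1024)   /* stop after 50 GB     */

    int main(void)
    {
        static char buf[BUF_SIZE];
        long long in_file = 0, total = 0;
        int filenum = 0, fd = -1;

        /* fill the buffer once with pseudo-random data */
        for (size_t i = 0; i < BUF_SIZE; i++)
            buf[i] = (char)rand();

        while (total < TOTAL_LIMIT) {
            /* roll over to a new output file when the limit is reached */
            if (fd < 0 || in_file >= FILE_LIMIT) {
                char name[64];
                if (fd >= 0)
                    close(fd);
                snprintf(name, sizeof(name), "testfile.%d", filenum++);
                fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                if (fd < 0) { perror("open"); return 1; }
                in_file = 0;
            }
            if (write(fd, buf, BUF_SIZE) != (ssize_t)BUF_SIZE) {
                perror("write");
                return 1;
            }
            in_file += BUF_SIZE;
            total += BUF_SIZE;
        }
        if (fd >= 0)
            close(fd);
        return 0;
    }

Timing the run externally (time, iostat, or SAN HQ) gives the sustained write rate.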

This is on a PS6500E with 48 x 7,200 RPM SATA drives.

On a pair of Linux hosts (12 cores) running 4 tests in parallel with 4 x GigE MPIO and open-iscsi, I was seeing writes on the order of 180 MB/sec and reads (using a similar sequential read of 1 MB blocks) of 200-220 MB/sec, depending on RAID level. Adding more tests in parallel did not seem to increase the total throughput, suggesting it was topped out.

This was running through a pair of stacked PC6224 switches.

This did not seem particularly impressive - I could get to half of this back in the mid-2000s with a cheap Infortrend SATA array with 24 disks.

I accept that the PS6500E might perform better with large numbers of random IOPS compared to the Infortrend. The thing that always leaves me disappointed by these fancy, expensive arrays is the poor sequential throughput, given that the 4 x 7,200 RPM SATA software RAID 5 setup in my home server can pull 165 MB/sec write and 201 MB/sec read.

I really would have expected the PS6500E to saturate 4 iSCSI uplinks with 48 disks!
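
For perspective, saturating four GigE uplinks would mean roughly 4 x ~120 MB/sec of line rate, call it somewhere in the 440-470 MB/sec range of usable payload once TCP/IP and iSCSI overhead are taken off, so 180-220 MB/sec is well under half of that ceiling (back-of-envelope figures, not measurements).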
