
March 5th, 2009 03:00

RAID config for MD1000 hosting VMWare

Not sure whether this belongs here or on a VMWare forum, but any comments would be welcome.

I have an MD1000 with 15 x 500GB SAS drives that I want to set up for a VMware configuration, and I'm looking for suggestions on the initial RAID arrays. I won't have many VMs, maybe 10-15, so I was planning just a few LUNs. No Exchange or SQL, mainly file storage.

Should I look at creating 1 large RAID 5/10 across all disks (barring a hot spare) and then carve up LUNs from there, or create individual virtual disks to match the LUN design?

Being quite new to storage, I'm not sure what is advised regarding array design, optimising the disks, etc., so all comments are helpful.

Many thanks,

 

847 Posts

March 5th, 2009 07:00

Maybe look at the performance tuning document for the MD3000 for ideas.

It's pretty darn good.

For max performance? Putting all the disks in one disk group under RAID 5 works and is easy.

But it also means that if your disk group goes down, everything goes down. The biggest worry with large RAID 5 arrays, as I see it, is the time it takes to rebuild onto an assigned hot spare; if you have another drive failure in that window, you lose the disk group. RAID 6 is now supported on many Dell controller platforms for exactly this reason. You will probably have to stagger your VM starts no matter how you do it.
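For a rough feel for how long that window can be, here is a back-of-envelope sketch; the sustained rebuild rate is an assumed figure, and real rebuilds vary a lot with rebuild priority and host IO load:

```python
# Back-of-envelope rebuild-window estimate for one 500 GB member drive.
# The 30 MB/s sustained rebuild rate is an assumption; real rates
# depend on controller settings, rebuild priority, and host IO load.
drive_mb = 500 * 1024          # 500 GB expressed in MB
rebuild_mb_per_s = 30

hours = drive_mb / rebuild_mb_per_s / 3600
print(f"Approximate rebuild window: {hours:.1f} hours")
# Roughly 4.7 hours during which a second failure in the same RAID 5
# disk group would take the whole group (and every LUN on it) offline.
```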

 

 

175 Posts

March 5th, 2009 07:00

The following site has useful information on PERC/MD1000 performance:

http://www.delltechcenter.com/page/PERC6+with+MD1000+and+MD1120+Performance+Analysis+Report

5 Posts

March 5th, 2009 08:00

Ok great, thanks all for the helpful responses.

Dev Mgr...I'll take a look at RAID 10.

 

4 Operator • 9.3K Posts

March 5th, 2009 08:00

I would recommend against one large RAID 5 with multiple LUNs.

 

The reason is that all VMs will be hitting the same hard drives, and the physical disks will become your bottleneck for sure. Two or three smaller RAID 5s would be a better choice.

 

Better yet, stay away from RAID 5 altogether, especially with slow (7200 rpm) drives. I'd recommend going with two RAID 10s of 6 and 8 drives or so, then using the 15th drive as the hot spare.

 

This way your IO is split between the physical drives, and RAID 10 performs much better for random IO, which is what you get when running multiple VMs (or a database-type application).
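As a rough illustration of that random-IO difference, the usual rule of thumb is a back-end write penalty of 4 for RAID 5 and 2 for RAID 10; the per-drive IOPS figure and read/write mix below are assumed example values, not numbers measured on an MD1000:

```python
# Rule-of-thumb random-IO comparison between RAID 5 and RAID 10.
# 75 IOPS per 7200 rpm drive and a 70/30 read/write mix are assumed
# illustrative values, not MD1000 measurements.
def effective_iops(drives, iops_per_drive, read_fraction, write_penalty):
    raw = drives * iops_per_drive
    write_fraction = 1 - read_fraction
    # Front-end IOPS the group can sustain once writes are amplified
    # by the RAID level's back-end write penalty.
    return raw / (read_fraction + write_fraction * write_penalty)

drives, per_drive, reads = 14, 75, 0.7
print("RAID 5  (penalty 4):", round(effective_iops(drives, per_drive, reads, 4)))
print("RAID 10 (penalty 2):", round(effective_iops(drives, per_drive, reads, 2)))
```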

4 Operator • 9.3K Posts

March 5th, 2009 08:00

It's obviously on the very opposite side of the RAID 5 debate, but for some of the reasons not to use RAID 5, check this website.

 

Mainly the reasoning is:

- RAID 5 takes a big performance hit when a drive fails, because the missing data has to be reconstructed on the fly as the OS requests it.

- when a RAID 5 is degraded (rebuilding or awaiting replacement of the failed drive), a bad sector on one of the surviving drives can mean lost data. If the sector held unused space you wouldn't lose anything; the controller would simply mark the sector bad and remap it on the next write. But if real data was there, it may no longer be readable.

- with larger and slower drives, the rebuild time is much longer, so the window in which you're exposed to this risk is bigger (a rough estimate of that risk is sketched below).
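To put a very rough number on that exposure, here is an illustrative estimate of the chance of hitting at least one unrecoverable read error while rebuilding; the 1-per-10^14-bits URE rate is an assumed spec-sheet figure, not a measured value for these drives:

```python
import math

# Illustrative estimate: probability of hitting at least one
# unrecoverable read error (URE) while rebuilding a degraded RAID 5.
# The 1-per-1e14-bits URE rate is an assumed spec-sheet figure for
# nearline drives, not a measured value for these particular disks.
surviving_drives = 13               # e.g. a 14-disk group with one drive failed
drive_bytes = 500 * 10**9           # 500 GB (decimal) per member drive
bits_read = surviving_drives * drive_bytes * 8

ure_per_bit = 1e-14
p_clean_rebuild = math.exp(bits_read * math.log1p(-ure_per_bit))
print(f"Chance of a URE during the rebuild: {1 - p_clean_rebuild:.0%}")
# Roughly 40% under these assumptions - which is why the degraded
# window matters and why RAID 6 or smaller groups reduce the risk.
```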

 

RAID 6 has double parity, giving you an extra parity check: if you run into a bad sector while in a degraded state, the data can still be recovered using the second parity. However, the extra parity calculation does mean an extra performance hit, and you also lose one more drive's worth of space to the second parity (with RAID 5 you lose one drive's worth to the single parity).

RAID 10 doesn't have to calculate parity; it runs multiple mirrors and stripes the data across them, so a drive failure doesn't require any reconstruction calculation. The controller just starts copying the surviving mirror's data (at the block level) to the hot spare or the replacement drive.
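For a sense of the space trade-off on this particular shelf, here is a quick usable-capacity comparison for 15 x 500GB drives; the layouts are hypothetical examples, not recommendations:

```python
# Usable-capacity comparison for a 15 x 500 GB shelf.
# The layouts below are illustrative examples, not recommendations.
DRIVE_GB = 500

def raid5(drives):  return (drives - 1) * DRIVE_GB   # one drive's worth of parity
def raid6(drives):  return (drives - 2) * DRIVE_GB   # two drives' worth of parity
def raid10(drives): return (drives // 2) * DRIVE_GB  # half the drives hold mirrors

layouts = {
    "14-disk RAID 5  + 1 hot spare": raid5(14),
    "14-disk RAID 6  + 1 hot spare": raid6(14),
    "6+8 disk RAID 10 + 1 hot spare": raid10(6) + raid10(8),
}
for name, usable_gb in layouts.items():
    print(f"{name}: {usable_gb} GB usable")
```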

847 Posts

March 5th, 2009 09:00

"The reason is that all VMs will be hitting the same harddrives and the physical disks will become your bottleneck for sure. 2 or 3 smaller raid 5's would be a better choice."

Maybe the MD3000i SAN controllers make the difference, but in our testing that is not what happens, because you gain so much extra speed from all the spindles. We did a lot of testing on this, and spindle count trumps all by far.

It actually takes physical-disk bottlenecking virtually out of the equation. Reduce your spindle count as low as you're suggesting, and you will for sure make the physical disks the bottleneck. :)

 

But we all have our own SAN/RAID strategies; there's no real right or wrong answer. I just had to point out that, in extensive testing, we found the opposite where performance is strictly the concern on this particular point.

To agree with you, RAID 5 stinks in most other respects. We have not done any testing with RAID 6 because it was not an option for us at the time of implementation.

 

4 Operator • 9.3K Posts

March 6th, 2009 07:00

I ran into a setup once where someone had set up an 11-disk RAID 5 (7200 rpm drives) with a hot spare. The setup was being used for about a dozen virtual machines on a few virtual disks. The user was complaining that mouse control and overall speed in his (Linux) VMs were sluggish.

After checking the basics, we devised a change plan to move the VMs to the server's internal storage, then change the RAID to a 6-disk RAID 10 and a 5-disk RAID 5 (so as not to lose as much space). After the first VMs were moved to the 6-disk RAID 10, the user opted to use the 5 remaining disks as a 4-disk RAID 10 plus a hot spare, as the performance benefit outweighed the disk space loss for him.

847 Posts

March 6th, 2009 08:00

Interesting, and thanks for sharing...

I love hearing about real world examples.

Our testing involved all Windows VMs, about 12 per host. We ran multiple extremely large SQL DB backups on those SQL Server VMs and did massive file copies to the VM servers individually at the same time. A lot of this must come down to the OS and the applications running on them. We are not seeing any sluggish performance; quite the opposite.

Even on a 10-disk RAID the SQL backups were drastically affected, and the file copies became slower.

 

The testing was done on MD3000i's only, not direct-attached storage. So we run our MD3000i's with 14 disks in the group, all the VDs defined in that group, and one VM per VD/LUN.


5 Posts

March 6th, 2009 08:00

I 100% agree with JOHNADCO, thanks a lot for sharing these experiences.

I'm going to double check the information on the applications and uses, and make a decision from there, which will be a lot easier now.

Thanks all

 

June 15th, 2011 08:00

The VMware HW Compatibility list does not list the MD1000, and I have had zero luck when I tried. I am also open to suggestions on how to add this array; the controller shows up.

June 15th, 2011 09:00

2.04 TB is what I have

4 Operator • 9.3K Posts

June 15th, 2011 09:00

You cannot make your virtual disks larger than "2048GB minus 512 bytes". Did you stay within this limit for your virtual disk size(s)?

4 Operator • 9.3K Posts

June 15th, 2011 20:00

That's too large. 2TB = 2048GB... you need to be at LESS than 2TB... e.g. 1.999TB or 2047GB.
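A quick sanity check of that limit against the 2.04 TB figure above; this assumes the controller reports TB in binary units (1 TB = 1024 GB), so treat the exact numbers as illustrative:

```python
# Sanity check of the limit discussed above: virtual disks must stay
# below 2048 GB minus 512 bytes. Assumes the controller reports
# "2.04 TB" in binary units (1 TB = 1024 GB); decimal reporting
# would shift the numbers slightly.
GB = 1024 ** 3
limit_bytes = 2048 * GB - 512

reported_tb = 2.04
vd_bytes = reported_tb * 1024 * GB

print(f"Limit       : {limit_bytes / GB:.2f} GB")
print(f"Virtual disk: {vd_bytes / GB:.2f} GB")
print("Within limit" if vd_bytes <= limit_bytes else "Too large - shrink below 2048 GB")
```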
