
April 24th, 2009 05:00

MD3000: two beginner questions

Hi all!

1. Is it possible to expand a virtual disk? I didn't find any information about resizing, and Dell Technical Support staff don't know. :-\

2. I added a new physical disk (1 TB) and then added it to an existing disk group. But initialization takes a lot of time with only 1 disk (more disks = more time). Is this normal? I can't find a faster way to do it.

Thanks in advance!

4 Operator • 9.3K Posts

April 24th, 2009 07:00

1. Yes, it's possible, but you have to use SMcli to do this. Check the "Command Line Interface Guide" and look for the "addCapacity" command.
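For illustration, here's roughly what that looks like. The array address, virtual disk name, and size below are made-up placeholders; verify the exact syntax against the CLI guide for your firmware level before running anything:

```shell
# Hypothetical example - array IP, VD name, and size are placeholders.
# Double-check the command syntax in the MD3000 CLI guide first.
SMcli 192.168.1.100 -c "set virtualDisk [\"MyVD\"] addCapacity=100GB;"
```

The added capacity only becomes usable to the host once the array finishes reconfiguring the disk group.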

 

2. Yes, this is normal; the array has to redistribute all the data across the new number of drives. If, for instance, you're going from 3 to 4 drives in a RAID 5, the system has to move 3 drives' worth of data (parity needs to be recalculated and then moved, so that data needs relocation as well) while still serving your regular data access needs. So during the window when this change is occurring, you take a performance hit on the production virtual disk(s), and the more I/O is going to the production virtual disks, the longer the restripe will take. Also, once you start this process you can't stop or cancel it.

 

Note: I'd recommend RAID 6 over RAID 5 any day, especially when using large, slow (7200 rpm) disks. If your choice is between "RAID 5 with a hot spare" or "RAID 6 with no hot spare", I'd go for the RAID 6 option.

The reason is the following: when a drive develops a bad sector in a (fully functional) RAID 5, the RAID parity can regenerate the missing data and put it in one of the spare sectors on the drive. However, when the RAID set loses a drive and is in a degraded state (rebuilding to a hot spare, rebuilding to a replacement drive, or still waiting for the failed drive to be replaced) and a drive runs into a bad sector, you may end up with invalidated RAID parity in one of the stripes. This could mean some data is lost (if the server using that disk space had data there; if it was blank space, the drive will simply remap to a spare sector the next time the server writes).

The slower the drive, the longer the rebuild takes, and often the slower drives are also larger, so the rebuild takes even longer. Also, obviously, the larger the drive the more sectors there are, and the larger the chance that you run into a bad sector during the degraded period.

With RAID 6, when 1 drive fails you're still redundant, and a bad sector can still be fully rebuilt from the second parity data. You could even withstand a second drive failure and still be online (though you'd then be degraded, like a single-drive failure on a RAID 5) if it came down to it.
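To make the bad-sector argument concrete, here is a toy Python sketch (drive count and block contents are invented for the example) of how single-parity RAID 5 rebuilds one lost block with XOR, and why a second missing block in the same stripe is unrecoverable without RAID 6's extra parity:

```python
# Toy illustration of RAID 5 single-parity (XOR) reconstruction.
# Blocks and "drives" here are made-up; real arrays work on sectors/stripes.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data drives in one stripe
parity = xor_blocks(data)            # the parity drive for that stripe

# One drive fails: its block is rebuilt from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True - a single loss is fully recoverable

# But if a *second* block in the same stripe is unreadable (a bad sector
# found mid-rebuild), one XOR parity can't solve for two unknowns - that
# is the RAID 5 risk window that RAID 6's second parity covers.
```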

 

On the MD3000/MD3000i, RAID 6 support was introduced with the 2nd-generation firmware (07.xx.xx.xx).

847 Posts

April 24th, 2009 10:00

Just know there are also some performance hits with RAID 6 vs. RAID 5.

I did some testing on this recently, and I think we are going to stay with RAID 5 on most of our disk groups. We have a lot of redundancy going on. Even if we lost a disk group completely on one SAN, it should be OK for us.

 

As stated above, the long rebuild time is pretty normal. We did drive-failure tests recently; performance on the SAN in general, and on the affected disk group in particular, remained surprisingly decent.

4 Operator • 9.3K Posts

April 24th, 2009 15:00

Just know there are also some performance hits with RAID 6 vs. RAID 5.

I did some testing on this recently, and I think we are going to stay with RAID 5 on most of our disk groups. We have a lot of redundancy going on. Even if we lost a disk group completely on one SAN, it should be OK for us.

 

As stated above, the long rebuild time is pretty normal. We did drive-failure tests recently; performance on the SAN in general, and on the affected disk group in particular, remained surprisingly decent.

Do you happen to have any numbers on the performance difference? I haven't really had an opportunity to do comparisons, but on EMC CLARiiON I've read that the difference could be as much as 30%.
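For what it's worth, a figure in that range can be estimated from the classic small-random-write penalties (4 back-end I/Os per host write for RAID 5: read old data, read old parity, write both; 6 for RAID 6 with its second parity block). This is a rough model, not a benchmark:

```python
# Back-of-the-envelope random-write comparison; real numbers depend on
# workload mix, stripe size, cache, and controller behavior.
RAID5_WRITE_PENALTY = 4  # back-end I/Os per small random host write
RAID6_WRITE_PENALTY = 6  # two parity blocks to read and rewrite

# Same spindles sustain the same back-end IOPS, so host-visible
# random-write throughput scales inversely with the penalty.
relative = RAID5_WRITE_PENALTY / RAID6_WRITE_PENALTY
print(f"RAID 6 random-write throughput ~{relative:.0%} of RAID 5")
print(f"i.e. roughly a {1 - relative:.0%} drop on write-heavy work")
```

Reads and large sequential writes (full-stripe writes) take little or no extra hit, which is why the observed gap varies so much by workload.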

2 Posts

April 27th, 2009 00:00

First of all, thank you very much. My doubts are solved now. :)

About resizing a VD: is it dangerous? A Dell Support guy told us that it is not a supported setup, and that if we resize any VD, data could be lost and then Dell could do nothing.

 

Thanks

4 Operator • 9.3K Posts

April 27th, 2009 07:00

As Dell provides the documentation and capability to grow a virtual disk, Dell should support it. However, they may not be able to walk you through it, as there is a risk involved (if they were to walk you through it and something went wrong, you might hold them responsible, which support departments tend to steer clear of).

 

The process is designed to be done without losing data.

 

However, if disaster were to strike (e.g. in the form of a complete power outage), the risk of damage to your data is higher than if you weren't right in the middle of resizing your virtual disk. And like all computer companies, they always state in their warranty/support contracts that they aren't responsible if your data is lost, for whatever reason.

847 Posts

April 27th, 2009 09:00

I have clocked RAID 5 vs. RAID 6 times using the same spindle count, on operations like SQL database backups/restores, and on the backup of our imaging system's 200-million-plus image store. One of those SQL DBs is 30 GB+.

30% would seem about right on those two items.

41 Posts

January 4th, 2010 21:00

Does anyone know the exact process in Red Hat Linux?
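Not speaking for Dell, but the usual sequence on RHEL after growing the virtual disk on the array would be roughly the following. This is a sketch: it assumes the grown VD shows up as /dev/sdb with an ext3 filesystem directly on the device (no partition table); device names and mount point are placeholders, and you should have a backup first.

```shell
# Placeholders throughout - adjust /dev/sdb and /mnt/data to your system.
echo 1 > /sys/block/sdb/device/rescan   # have the kernel re-read the new size
blockdev --getsize64 /dev/sdb           # verify the larger capacity is seen
umount /mnt/data
e2fsck -f /dev/sdb                      # resize2fs wants a clean fs check
resize2fs /dev/sdb                      # grow ext3 to fill the device
mount /mnt/data
```

If the filesystem sits on a partition or on LVM, there are extra steps (grow the partition, or `pvresize` the PV and `lvextend` the LV) before the filesystem resize.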
