April 28th, 2009 05:00

Expanding a RAID group and a LUN on a CX300 with DAE

We have a 5-disk RAID 5 RAID group. Can we add a single drive to the group, thus expanding its capacity? If we can, will we have to do anything to the LUN(s) and/or the Windows 2003 Enterprise (active/passive cluster) server?

We are looking to expand the disk capacity on our CX300 by adding one or more drives. I need to know whether we have to create a separate RAID group and LUN, or whether we can just expand what we have.

Thank you.

April 28th, 2009 07:00

Yes, you can do this.

 

Go to the RAID group properties and then the Disk tab. There should be an option to add extra drives to the RAID group. The drive you're adding should be the same size as or larger than the drives currently in use, the same type (FC or SATA), and preferably the same speed (10k, 15k, or 7,200 rpm).
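To answer the capacity part of the question: RAID 5 keeps one disk's worth of parity, so usable capacity is (number of disks - 1) x disk size, and adding one drive to a 5-disk group gains you exactly one drive's worth of usable space. A quick sketch (the 146 GB drive size is just an assumed example, not something stated in the thread):

```python
def raid5_usable_gb(disk_count: int, disk_size_gb: float) -> float:
    """RAID 5 usable capacity: one disk's worth of space goes to parity."""
    if disk_count < 3:
        raise ValueError("RAID 5 needs at least 3 disks")
    return (disk_count - 1) * disk_size_gb

# Assumed example: 146 GB FC drives.
before = raid5_usable_gb(5, 146)  # 4 * 146 = 584 GB
after = raid5_usable_gb(6, 146)   # 5 * 146 = 730 GB
print(f"before={before} GB, after={after} GB, gained={after - before} GB")
```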

 

The disk you're adding cannot be one of enclosure 0 disks 0 through 4 (the FLARE/PSM drives).

 

Once you click OK on this expansion, you can't go back or cancel it. It will take several hours to a couple of days depending on the size of the disks, their rotational speed, the amount of regular I/O going on, and how many disks you're adding.

 

Once the RAID group has finished transitioning, the extra free space can be used for one or more new LUNs. If you wish to expand an existing LUN instead:

1. Create a new LUN from the free space (do NOT assign it to the storage group).
2. Right-click the existing LUN and select the Expand option (this assumes you're running Navisphere release 16 or later).
3. Select Concatenate (Stripe should only be used across two different RAID groups with identical setups).
4. Pick the new LUN you created and let the expansion finish.
5. Rescan disks in Disk Management on both cluster nodes.
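The expand operation above builds a concatenated metaLUN: the new component LUN is simply appended after the existing one, so the resulting capacity is the sum of the pieces. A toy model of that rule (the 400 GB and 146 GB sizes are made-up examples), including the contrast with striped expansion, which needs matching components:

```python
def metalun_capacity_gb(base_gb: float, components_gb: list, striped: bool = False) -> float:
    """Concatenation appends capacity; striped expansion requires equal-size
    components (in practice, component LUNs from identically configured RAID groups)."""
    if striped and any(c != base_gb for c in components_gb):
        raise ValueError("striped expansion needs identical component LUNs")
    return base_gb + sum(components_gb)

# Assumed example: 400 GB existing LUN, 146 GB new LUN from the freed space.
print(metalun_capacity_gb(400, [146]))  # concatenated: 546 GB
```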

 

At that point you'll need to plan for roughly a 5-minute downtime window on your cluster to perform the steps in Microsoft KB article 304736. Step 3 is where you take the cluster resources offline (and then bring only the disk in question back online); step 11 is where the cluster comes back online. That is your downtime window.
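For reference, the actual grow step in that KB is done with diskpart on the node that currently owns the disk. A sketch of the session (the volume number 1 is just a placeholder; pick the right one from the list output):

```text
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 1
DISKPART> extend
DISKPART> exit
```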

 

Obviously, make sure you have fully tested backups (on tape or similar) before doing any of this.
