
March 19th, 2009 08:00

Setting up storage, newbie questions

Hi,

I'm beginning to fear I was fed kool-aid while comparing the NetApp and EMC solutions we were offered. Basically we were told that it would be a 20-minute installation of our NX4, and that we would be able to set up disks etc. as needed ourselves.

Our NX4 came with 6 x SATA drives for the 4+1 with HS system drives, as well as 6 x SAS drives that are to be used for our databases (MSSQL). I wanted to have them set up as two RAID1 arrays with two HSs, but they had been set up as a single RAID10 array with two HSs from the factory.

Confident about the promise that we could change the factory setup (as long as we stayed away from the system drives), I destroyed the second storage pool with the RAID10 setup. I recreated two new RAID10 storage pools with two disks each, and set up the remaining two as HSs. All of this was done from Navisphere, as they couldn't be modified in the Celerra Manager interface (Volumes & Pools are greyed out?).

Now, although the pools have been made & initialized, they don't show up in the Celerra Manager. If I instead leave the 6 SAS disks unused in Navisphere and go to Storage -> Systems -> Configure in the Celerra Manager, I have three predefined templates to choose from: NX4_4+1R5_HS_5+1R5, NX4_4+1R5_HS_R10_R10_R10 & NX4_4+1R5_HS_4+2R6. So basically I can choose between a 5+1R5, R10 or a 4+2R6 for our SAS disks.

First of all, I need a spare SAS disk, so all options are out on this issue alone. Second - and this might be me not understanding the virtualization of the whole storage aspect - I really want two separate RAID1 volumes rather than a single R10 volume, so we can separate our log and normal data since the usage patterns are completely different (sequential writes vs. random reads). If I just make one RAID10 volume and create a filesystem on it, the data will basically be striped over the whole RAID10 array - or am I not understanding this correctly?

Is it not possible for me to make a custom disk group, or perhaps create a custom template that fits my needs? Am I supposed to contact support each time we need to change disk configs?


March 27th, 2009 11:00

sorry - I should have said I meant a Celerra system-defined storage pool

The fact that Navisphere Express also uses the term "pool" makes it confusing

all the RAID1/0 LUNs that the Celerra sees will get associated as dvols with the clarsas_r10 Celerra system-defined pool

From there - if you use AVM, it will try to find 4, 3, 2 or 1 dvols to stripe across; see the attached manual on page 27

so in your case - if you assign two 1+1 RAID1/0 LUNs to the Celerra, then AVM will stripe between them and you automatically get the I/O performance of all four disks

Or you can use manual volume management and stripe to your liking
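roughly what the manual route looks like from the Control Station CLI - the dvol names, stripe size and filesystem name below are just placeholders, so check the exact syntax against the manual:

# list the dvols the Celerra sees and whether they are in use
nas_disk -list
# stripe across the dvols you pick yourself (d7,d8 are placeholders, 32768 is an example stripe size in bytes)
nas_volume -name stv_sas1 -create -Stripe 32768 d7,d8
# wrap the stripe in a metavolume and build the filesystem on top of that
nas_volume -name mtv_sas1 -create -Meta stv_sas1
nas_fs -name fs_sqldata -create mtv_sas1

that way nothing is sliced or striped for you automatically - you decide which filesystem sits on which disks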

1 Attachment

March 27th, 2009 11:00

Thanks!

After deleting the missing/unbound disks from the NAS through nas_disk, I was able to create new RAID groups and have them discovered by the Celerra. This also cleared the errors I was getting through nas_checkup.

So the procedure to delete a disk pool / LUN is to first delete the disk through nas_disk, and then unbind it in Navisphere afterwards, I assume.
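In other words, something along these lines (d7 is just an example dvol name - check nas_disk -list for the real ones, and the man page for the exact delete options):

# see which dvols exist and whether any are still marked as in use
nas_disk -list
# delete the stale, unused dvol on the Celerra side first (d7 is a placeholder)
nas_disk -delete d7
# re-run the health check to confirm the errors are cleared
nas_checkup
# ...and only then unbind the corresponding LUN in Navisphere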

March 27th, 2009 11:00

Since the striping is only for members within a specific disk pool, and I'm unable to create a RAID10 group with more than 2 disks, am I mistaken in believing that I'll never get better write performance out of a LUN than a 2-disk RAID1/0 within a single disk pool?


March 27th, 2009 12:00

yes - first make sure that the LUNs (dvols) are really no longer in use by anything on the Celerra, then delete the dvols on the Celerra and then remove them from the Clariion

good to hear it works now

I'd like to understand why it didn't work initially?

did you create a RAID1/0 with more than two disks?

or did the still-existing but non-working dvols on the Celerra prevent the detection of the new LUNs?

March 27th, 2009 21:00

Alright, that clears it up, thanks. It didn't make sense otherwise either :)

There's definitely plenty of opportunity for confusion when the Celerra/Navisphere Express/Manager terms get mixed up, with identical terms for different entities and different terms for identical entities. I'm sure it's all described in the various manuals, but even then, keeping the terms straight seems tricky until you've got it.

I've successfully created the two RAID1/0 groups and had them added to the clarsas_r10 system pool. I'll set up user-defined pools to keep the filesystems on separate RAID groups so we don't mix log & data I/O.
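Roughly what I have in mind, assuming the two RAID1/0 LUNs show up as dvols d7 and d8 (names and sizes here are placeholders, not the real config):

# one user-defined pool per RAID1/0 dvol so log and data never share spindles
nas_pool -create -name pool_sqllog -volumes d7
nas_pool -create -name pool_sqldata -volumes d8
# filesystems carved from their own pool each
nas_fs -name fs_sqllog -create size=50G pool=pool_sqllog
nas_fs -name fs_sqldata -create size=200G pool=pool_sqldata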

March 27th, 2009 21:00

I think this is the sequence of events that caused the config issues:

1. I deleted the 2 x RAID1/0 groups, since they showed up as a single storage group in Celerra and I figured it was a classic RAID10.
2. Created two new 2-disk RAID1/0 pools in Navisphere
3. As the disks weren't unbound correctly in step 1, the rescan failed
4. If disks 6-11 were left unused in Navisphere, I could reconfigure from Celerra, which seemed to clear up the unbound disks, but that didn't fit our disk layout needs.

So basically, once the unused disks were deleted from the Celerra correctly, I was able to rescan & manage disks as expected.
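For reference, the command-line equivalent of the rescan should be something like this (server_2 is the usual Data Mover name, so adjust as needed):

# rescan the attached storage so newly bound LUNs show up as dvols
server_devconfig server_2 -create -scsi -all
# then confirm the new dvols are visible
nas_disk -list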


September 14th, 2010 03:00

Just wanted to express my thanks for all the advice given in this thread. Very similar issues here. Lots of tips and helpful hints that led me to a resolution. Thanks, guys.

Regards
