deanr1
1 Copper

Re: Setting up storage, newbie questions

Rainer has provided you with excellent advice on your issues and I hope EMC support has helped you resolve this already.

Just a couple of comments:

1) The system should ship from the factory without disks 6-11 bound at all. Since Celerra only supports 1+1 RAID 1/0, I don't have any idea where your second RAID group came from.

2) Rainer is correct. Celerra doesn't support user LUNs with a host ID less than 16; however, NaviExpress doesn't give the user control over host ID assignment and uses the next available ID. The Rescan operation (sometimes referred to as diskmark) will remove the LUN from the Celerra storage group and re-add it at the first available ID >= 16.

3) nas_checkup is reporting the issue because it looks at what Dart currently scans on the Fibre, and since your Rescan is evidently failing, the IDs are not re-assigned and are still seen by Dart as 6 and 7. They don't appear in the NAS configuration yet because Rescan has not successfully discovered them.

If the GUI Rescan is not giving a useful error message, you may want to try the following command from the shell prompt:

nas_diskmark -mark -all

This is what Rescan does under the covers.
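
Once that completes without errors, a quick way to double-check (just a sketch, your output will look different):

nas_disk -list     # the newly discovered LUNs should now show up as dvols
nas_checkup        # the host ID warnings should be gone once the marking succeeds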

Hopefully your issue has been resolved quickly via support. If not, please push them to escalate.

Peter_EMC
3 Zinc

Re: Setting up storage, newbie questions

I don't understand where this started from.

From a Celerra point of view, RAID1 is only supported on the CX and CX3 backends.

On the newer backends (CX4, AX4-5), Celerra supports RAID1/0 with 2 disks instead, which is the same as RAID1 with 2 disks.
RAID1/0 with more than 2 disks is not supported for Celerra.


Regards
Peter
MarkSRasmussen
1 Copper

Re: Setting up storage, newbie questions

Hi deanr,

Thanks for replying.

I'm still awaiting an update on the SR case since I posted my last comment.

1)
That sounds odd; during the sales process I was asked what config I wanted so they could set it up from the factory. We started out wanting a 4-disk RAID10, but later changed it to a 2x2-disk RAID1, which is probably where things went wrong.

3)
nas_diskmark -mark -all gives the following error:
Error 5017: storage health check failed
SL7E9084400002 d9, no storage API data available
SL7E9084400002 d10, no storage API data available

Can these be remnants of the original secondary disk group that wasn't completely cleaned up, and thus haunting us now?
MarkSRasmussen
1 Copper

Re: Setting up storage, newbie questions

A RAID10 with two disks, is that not similar/identical to a RAID1 in terms of structure and performance characteristics?

So what you're saying is that if I, from Navisphere, create a "RAID1/0" disk pool with more than two disks, the Celerra won't support it? So in effect I can't create a 4-disk RAID10, effectively limiting write performance to that of a single RAID1? Or will it still stripe the file system across disk pools within the same performance profile, and thus give me equal performance to a "true" RAID 1/0?

Thanks!
deanr1
1 Copper

Re: Setting up storage, newbie questions

Yes,

This storage check error indicates that the LUNs in the old RAID group were unbound via NaviExpress without first being removed from the NAS configuration database via nas_disk -delete <disk-name>.

Generally you can just do nas_disk -delete <disk-name> on those 2 disks to remove the old NAS disks and then rerun nas_diskmark -m -a to discover the new ones.
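
For example, a rough sequence using the d9/d10 names from your checkup output (verify the names against nas_disk -list first):

nas_disk -list               # confirm d9 and d10 are the stale entries and not in use
nas_disk -delete d9          # remove the stale dvols from the NAS configuration database
nas_disk -delete d10
nas_diskmark -mark -all      # rediscover the new LUNs (same as the GUI Rescan)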

This case has been escalated to me internally, so our escalation engineer will be contacting you shortly to walk you through this.
Rainer_EMC
5 Osmium

Re: Setting up storage, newbie questions

A RAID10 with two disks, is that not similar/identical to a RAID1 in terms of structure and performance characteristics?


yes it is

I think the confusion is because on older Clariion arrays there were both a RAID1 (only 2 disks) and a RAID1/0 (2, 4, 8, ... disks) config option.
These days you can't create a RAID1 - it will just create a RAID1/0 with two disks.

So what you're saying is that if I, from Navisphere, create a "RAID1/0" disk pool with more than two disks, the Celerra won't support it?


correct

for Celerra we prefer to work with two-disk RAID1/0 LUNs and then use the Celerra AVM volume manager to stripe across multiple of them

So in effect I can't create a 4-disk RAID10, effectively limiting write performance to that of a single RAID1?
Or will it still stripe the file system across disk pools within the same performance profile, and thus give me equal performance to a "true" RAID 1/0?


correct - we just do the striping on the Celerra side - so you get the same performance but more flexibility
technically speaking we stripe within the members of a pool - not across pools
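
if you want to see what AVM has to work with, you can list it from the Control Station - roughly like this (command details from memory, check the man pages on your box):

nas_pool -list               # the system-defined storage pools AVM uses
nas_pool -size <pool-name>   # space available in a given pool
nas_disk -list               # the individual dvols (backend LUNs) the pools are built from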
MarkSRasmussen
1 Copper

Re: Setting up storage, newbie questions

Thanks!

After deleting the missing/unbound disks from the NAS through nas_disk, I was able to create new RAID groups and have them discovered by the Celerra. This also cleared the errors I was getting through nas_checkup.

So I assume the procedure to delete a disk pool / LUN is to first delete the disk through nas_disk, and then unbind it from Navisphere afterwards.
MarkSRasmussen
1 Copper

Re: Setting up storage, newbie questions

Since the striping is only for members within a specific disk pool, and I'm unable to create a RAID10 group with more than 2 disks, am I mistaken in believing that I'll never get better write performance out of a LUN than a 2-disk RAID1/0 within a single disk pool?
Rainer_EMC
5 Osmium

Re: Setting up storage, newbie questions

sorry - I should have said I meant a Celerra system-defined storage pool

The fact that NaviExpress also uses the term "pool" makes it confusing

all the RAID1/0 LUNs that the Celerra sees will get associated as dvols with the clarsas_r10 Celerra system-defined pool

From there - if you use AVM then it will try to find 4, 3, 2, or 1 dvols to stripe across
see the attached manual on page 27

so in your case - if you assign two 1+1 RAID1/0 LUNs to the Celerra then AVM will stripe between them and you automatically get the I/O performance of all four disks
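
for example, roughly (just a sketch - the filesystem name and size are made up, clarsas_r10 is the pool mentioned above):

nas_fs -name fs01 -create size=100G pool=clarsas_r10    # AVM picks the dvols and builds the stripe for you
nas_fs -info fs01                                       # shows the underlying volume structure it created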

Or you can use manual volume management and stripe to your liking.
Rainer_EMC
5 Osmium

Re: Setting up storage, newbie questions

yes - first make sure that the LUNs (dvols) are really no longer in use by anything on the Celerra, then delete the dvols on the Celerra and then remove them from the Clariion
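
in commands that would look roughly like this (sketch only - substitute the real dvol names):

nas_disk -list                 # check that the dvols in question are no longer in use
nas_disk -delete <disk-name>   # remove each dvol from the NAS configuration
(then unbind the LUN / destroy the RAID group in Navisphere)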

good to hear it works now

I'd like to understand why it didn't work initially.

did you create a RAID1/0 with more than two disks?

or did the still-existing but non-working dvols on the Celerra prevent the detection of the new LUNs?