
March 19th, 2009 08:00

Setting up storage, newbie questions

Hi,

I'm beginning to fear I was fed Kool-Aid while comparing the NetApp and EMC solutions we were offered. Basically, we were told it would be a 20-minute installation of our NX4, and that we would be able to set up disks etc. as needed ourselves.

Our NX4 came with 6 x SATA drives for the 4+1 plus hot spare system drives, as well as 6 x SAS drives that are to be used for our databases (MSSQL). I wanted the SAS drives set up as two RAID1 arrays with two hot spares, but they had been set up as a single RAID10 array with two hot spares at the factory.

Confident in the promise that we could change the factory setup (as long as we stayed away from the system drives), I destroyed the second storage pool with the RAID10 setup. I recreated two new RAID10 storage pools with two disks each, and set up the remaining two disks as hot spares. All of this was done from Navisphere, as these couldn't be modified in the Celerra Manager interface (Volumes & Pools are greyed out?).

Now, although the pools have been made & initialized, they don't show up in Celerra Manager. If I instead leave the 6 SAS disks unused in Navisphere and go to Storage -> Systems -> Configure in Celerra Manager, I have three predefined templates to choose from: NX4_4+1R5_HS_5+1R5, NX4_4+1R5_HS_R10_R10_R10 & NX4_4+1R5_HS_4+2R6. So basically I can choose between a 5+1 R5, an R10, or a 4+2 R6 for our SAS disks. First of all, I need a spare SAS disk, so all options are out on that issue alone.

Second - and this might be me not understanding the virtualization of the whole storage stack - I really want two separate RAID1 volumes rather than a single R10 volume, so we can separate our log data from our normal data, since the usage patterns are completely different (sequential writes vs. random reads). If I just make one RAID10 volume and create a filesystem on it, the data will basically be striped over the RAID10 array - or am I not understanding this correctly?

Is it not possible for me to make a custom disk group, or perhaps create a custom template that fits my needs? Am I supposed to contact support each time we need to change disk configs?

274.2K Posts

March 26th, 2009 05:00

Rainer has provided you with excellent advice on your issues and I hope EMC support has helped you resolve this already.

Just a couple of comments:

1) The system should ship from the factory without disks 6-11 bound at all. Since Celerra only supports 1+1 RAID 1/0, I have no idea where your second RAID group came from.

2) Rainer is correct. Celerra doesn't support user LUNs with a host ID less than 16; however, NaviExpress doesn't give the user control over host ID assignment and simply uses the next available ID. The Rescan operation (sometimes referred to as diskmark) will remove the LUN from the Celerra storage group and re-add it at the first available ID >= 16.

3) nas_checkup is reporting the issue because it looks at what DART currently scans on the Fibre Channel side; since your Rescan is evidently failing, the IDs are not reassigned and they are still seen by DART as 6 and 7. They don't appear in the NAS configuration yet because Rescan has not successfully discovered them.

If the GUI Rescan is not giving a useful error message, you may want to try the following command from the shell prompt:

nas_diskmark -mark -all

This is what Rescan does under the covers.
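If that completes successfully, a quick sanity check (my suggestion, not part of the official procedure) is to list the disk volumes afterwards:

nas_disk -list

The newly marked data disks should then show up alongside the system disks.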

Hopefully your issue has been resolved quickly via support. If not, please push them to escalate.

8.6K Posts

March 19th, 2009 09:00

Hi Mark,

welcome to the forum.

Yes, it is certainly possible to create a custom raid config and not just use the templates.

I can only guess that a small step went missing when creating the LUNs with NaviSphere - most likely you didn't set the Host ID to 16 or larger.

Please take a look at Powerlink
Home > Support > Product and Diagnostic Tools > Celerra Tools > NX4, NX4FC
then "Step 5: Configure for Production" and "Perform Common Post-CSA tasks" to "Configure additional storage for your integrated system"
http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviManager.htm

There you'll find step-by-step instructions.
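For completeness - if you ever work from the Control Station shell with full Navisphere CLI access, a hypothetical way to place a LUN at a specific host ID (HLU) is via the storage group commands; the storage group name and the ALU/HLU numbers below are placeholders for illustration, not values from your system:

/nas/sbin/navicli -h <SP-address> storagegroup -list
/nas/sbin/navicli -h <SP-address> storagegroup -addhlu -gname <celerra_storage_group> -hlu 16 -alu 6

With the GUI tools, though, the step-by-step guide above is the way to go.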

139 Posts

March 19th, 2009 11:00

You must have missed a step in creating the RAID groups and LUNs. If Celerra presents you with templates, then it is not seeing those new LUNs.

Celerra can do custom RAID groups, but it will only accept certain R1, R5, and R6 configurations. For example, if you created an R5 with 3 disks, it will not see it. But I don't think you will have that issue with your R1 config.

Volumes and pools are greyed out because you don't have the Advanced Celerra Manager license. If you go under Licenses there should be a check box to enable it. Then you can create custom pools. But to create custom pools, Celerra must first see the LUNs.
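As a sketch of what a supported custom config looks like from the Navisphere CLI side (the RAID group ID, disk positions, and LUN number are illustrative placeholders, not values from this system):

navicli -h <SP-address> createrg 20 0_0_6 0_0_7
navicli -h <SP-address> bind r1_0 17 -rg 20

That creates a two-disk RAID group and binds a RAID 1/0 LUN on it - one of the layouts Celerra will accept.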

1.5K Posts

March 19th, 2009 12:00

Hi,

Welcome to the EMC Support forums. Rainer has already provided the required details and the link to the step-by-step process.

Also - you may find a few more older forum posts on this subject; one of the most recent is the following:

http://forums.emc.com/forums/thread.jspa?threadID=111450&start=0&tstart=20

I am sure you'll have a good time with your new NX4. We'd welcome your valued contributions to the forums as well. :)
Regards,
Sandip

March 20th, 2009 01:00

Thank you, all of you, for your help so far!

I'm seeing a pattern in your posts indicating that I'm not using the right tools. Rainer's link mentions Navisphere Manager, and there's mention of RAID groups & LUNs.

I'm obviously using the Celerra Manager interface, but without the option of doing any configuration in it (we don't have the advanced license). For the Navisphere setup, I've installed the "Navisphere Service Taskbar", but that only seems to be for initial hardware registration & installing new DAEs etc. If I browse to the IP of either of our SPs, I'm presented with the "Navisphere Express" web interface - this is where I've been setting up my disk configs.

The Navisphere Express interface works with Disk Pools & Virtual Disks, not RAID groups & LUNs. When creating virtual disks (which must be LUNs?), I can set no option other than the virtual disk name, so I'm not really sure where to set the Host ID.

Should I be using the "Navisphere Manager" software on my setup CDs - have I completely overlooked it?

8.6K Posts

March 20th, 2009 02:00

now we are getting closer :-)

sorry - I didn't realize you are using Navi Express

On the NX4 there are two user interfaces, depending on which license you bought - NaviSphere Express and NaviSphere Manager.
Navi Express is kind of a light version, but for your purpose it does the same job.

What is confusing is that, in order to cater to customers unfamiliar with Navi and SAN management, it tries to make things easy and calls RAID groups "disk pools" and LUNs "virtual disks".

see:
http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviExpress.htm

Alternatively - if you have an EMC engineer nearby - you can also ask him to download the new CSA with the provisioning wizard and use that.


8.6K Posts

March 20th, 2009 03:00

Since you've mentioned MS SQL - take a look at the Reference Architecture and Best Practices from Powerlink.

You can find White Papers at this location on Powerlink:

Home > Support > Technical Documentation and Advisories > White Papers > All White Papers       

Also - as you can see in the Reference Architecture, you could actually configure your system with just the S-ATA hot spare, since it can stand in for a failed SAS drive as well.
Of course, in case of a failure your performance would drop to the S-ATA level, but you would have more capacity.

March 20th, 2009 05:00

Thanks - I guess I should have been even more explicit about which tools I was using; I didn't realize there were two completely different interfaces & naming schemes depending on the license :)

I'm still having trouble getting things to work, and I fear it's probably a simple config issue that's holding me back. The system is not in production yet, so I have no problem making config changes if necessary.

Status:
I can create disk pools and virtual disks just like the manual says. There's still no mention of host IDs - I see, however, that the manual names the disks "LUN XX" - is the XX the Host ID, or is the virtual disk name just an arbitrary name?

If I run a nas_checkup command, a single test fails (Checking if Symapi data is present) and the following issues are noted:

Warnings:
-----
Storage System: Check system lun configuration
Symptom: The following system luns are misconfigured: 6, 7
Action : Contact EMC Customer Service and refer to EMC Knowledgebase
emc146016. Include this log with your support request.
-----

Errors:
-----
Control Station: Check if Symapi data is present
Symptom: * SL7E9084400002 d9, no storage API data available
* SL7E9084400002 d10, no storage API data available
Action : Contact EMC Customer Service and refer to EMC Knowledgebase
emc146016. Include this log with your support request.

Storage System: Check if auto assign are disabled for all luns
Symptom: Cannot obtain LUN 20, 17 auto assign information from
CLARiiON SL7E9084400002
Action :
1. Run "/nas/sbin/navicli -h A_SL7E9084400002 getlun -aa"
command to get LUN auto assign information.
2. If the command returns errors, use the errors to investigate
further.

Storage System: Check if auto trespass are disabled for all luns
Symptom: Cannot obtain LUN 20, 17 auto trespass information from
CLARiiON SL7E9084400002
Action : Contact EMC Customer Service and refer to EMC Knowledgebase
emc146016. Include this log with your support request.
-----

If I go to the connections section in Navisphere, four connections are listed as Active with the following warning: "We cannot determine the server operating system for this connection. Settings for this connection may have been changed. To correctly configure this connection for access to virtual disks, click 'Help' at the top right-hand side of this page." I'm not sure if that's anything to worry about.

Finally, if I go to the Celerra Manager and click Rescan, I get an error message that simply states "Rescan failed." / "Storage system Rescan failed.". Otherwise, everything's green in the components view, events and everywhere else. If I do create disk setups using the templates, the disk group is automatically found by the Celerra Manager a couple of minutes after initialization is done, so it seems it's able to see them somehow.

Edited: Removed old events from before system install.

8.6K Posts

March 20th, 2009 08:00

Hi,

sorry it's taking so long for what should be a simple process.

What DART version are you using (see nas_version) ?

I think the fastest way to get you going is to open a case with EMC support and ask them to walk you through using a Webex session.
Just ask them to look at Primus emc192005.

I can create disk pools and virtual disks just like the manual says. There's still no mention of host IDs - I see, however, that the manual names the disks "LUN XX" - is the XX the Host ID, or is the virtual disk name just an arbitrary name?


you're right - on Navi Express you can't specify the Host ID (HLU)

the way it's supposed to work is that, starting with DART 5.6.39, the Celerra renames and renumbers the LUNs after a rescan to the values it needs

Warnings:
-----
Storage System: Check system lun configuration
Symptom: The following system luns are misconfigured: 6, 7
Action : Contact EMC Customer Service and refer to
EMC Knowledgebase emc146016. Include this log with your support request.
-----


yes, that one shows the Celerra is seeing LUNs 6 and 7 - but it should renumber them to IDs of 16 or higher for use as data LUNs

regards
Rainer

March 21st, 2009 09:00

What DART version are you using (see nas_version) ?


I'm using DART 5.6.40-3.

I think the fastest way to get you going is to open a
case with EMC support and ask them to walk you
through using a Webex session
Just ask them to look at Primus emc192005


Starting to sound like a good idea, I'll try and get in touch with support.

you're right - on Navi Express you can't specify the Host ID (HLU)

the way it's supposed to work is that, starting with DART 5.6.39, the Celerra renames and renumbers the LUNs after a rescan to the values it needs


So the root cause for me not being able to see them is probably that my rescan is failing, thereby not allowing the Celerra to rename them properly.

Warnings:
-----
Storage System: Check system lun configuration
Symptom: The following system luns are misconfigured: 6, 7
Action : Contact EMC Customer Service and refer to EMC Knowledgebase emc146016. Include this log with your support request.
-----

yes, that one shows the Celerra is seeing LUNs 6 and 7 - but it should renumber them to IDs of 16 or higher for use as data LUNs


But this is reported even though I only have the system pool & LUNs defined - I don't suppose those should be renamed?

Edit: SR #28868396 created.

Thanks again,
Mark

674 Posts

March 26th, 2009 07:00

I don't understand where this started from.

From a Celerra point of view, RAID1 is only supported on the CX and CX3 backends.

On the newer backends (CX4, AX4-5), Celerra supports RAID1/0 with 2 disks instead, which is the same as RAID1 with 2 disks.
RAID1/0 with more than 2 disks is not supported for Celerra.


Regards
Peter

March 27th, 2009 08:00

A RAID10 with two disks - is that not similar or even identical to a RAID1 in terms of structure and performance characteristics?

So what you're saying is that if I, from Navisphere, create a "RAID1/0" disk pool with more than two disks, the Celerra won't support it? So in effect I can't create a 4-disk RAID10, effectively limiting write performance to that of a single RAID1? Or will it still stripe the file system across disk pools within the same performance profile, and thus give me equal performance to a "true" RAID 1/0?

Thanks!

March 27th, 2009 08:00

Hi deanr,

Thanks for replying.

I'm still awaiting an update on the SR since I posted my last comment.

1)
That sounds odd; during the sales process I was asked what config I wanted so they could set it up at the factory. We started out wanting a 4-disk RAID10 but later changed it to a 2x2-disk RAID1, which is probably where things went wrong.

3)
nas_diskmark -mark -all gives the following error:
Error 5017: storage health check failed
SL7E9084400002 d9, no storage API data available
SL7E9084400002 d10, no storage API data available

Can these be remnants of the original secondary disk group that wasn't completely cleaned up, and thus haunting us now?

274.2K Posts

March 27th, 2009 08:00

Yes,

This storage check error indicates that the LUNs in the old RAID group were unbound via NaviExpress without first being removed from the NAS configuration database via nas_disk -delete.

Generally you can just do nas_disk -delete on those 2 disks to remove the old NAS disks and then rerun nas_diskmark -m -a to discover the new ones.

This case has been escalated to me internally, so our escalation engineer will be contacting you shortly to walk you through this.
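For reference, the cleanup would look roughly like this from the Control Station (d9 and d10 are the disk names from your nas_checkup output - verify them with nas_disk -list before deleting anything):

nas_disk -list
nas_disk -delete d9
nas_disk -delete d10
nas_diskmark -mark -all

But since this is in the SR queue, I'd let the escalation engineer drive.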

8.6K Posts

March 27th, 2009 10:00

A RAID10 with two disks - is that not similar or even identical to a RAID1 in terms of structure and performance characteristics?


yes it is

I think the confusion is because on older CLARiiONs there were separate config options for RAID1 (2 disks only) and RAID1/0 (2, 4, 8, ... disks).
These days you can't create a RAID1 - it will just create a RAID1/0 with two disks.

So what you're saying is that if I, from Navisphere, create a "RAID1/0" disk pool with more than two disks, the Celerra won't support it?


correct

for Celerra we prefer to work with two-disk RAID1/0 LUNs and then use the Celerra AVM volume manager to stripe across multiple of them

So in effect I can't create a 4-disk RAID10, effectively limiting write performance to that of a single RAID1?
Or will it still stripe the file system across disk pools within the same performance profile, and thus give me equal performance to a "true" RAID 1/0?


correct - we just do the striping on the Celerra side, so you get the same performance but more flexibility.
Technically speaking, we stripe within the members of a pool - not across pools.
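To put commands to that (the filesystem name and size are made-up examples, and the pool name is a placeholder - nas_pool -list shows the actual pool names on your system):

nas_pool -list
nas_fs -name sqldata -create size=100G pool=<your_SAS_pool>

AVM then lays the filesystem out striped across that pool's member LUNs.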