Rainer_EMC
March 19th, 2009 09:00
Welcome to the forum.
Yes, it is certainly possible to create a custom RAID config and not just use the templates.
I can only guess that you missed a small thing when creating the LUNs with NaviSphere - most likely you didn't set the Host ID larger than 16.
Please take a look at Powerlink
Home > Support > Product and Diagnostic Tools > Celerra Tools > NX4, NX4FC
then "Step 5: Configure for Production" and "Perform Common Post-CSA tasks" to "Configure additional storage for your integrated system"
http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviManager.htm
There you'll find step-by-step instructions
alias23122
March 19th, 2009 11:00
You must have missed a step in creating the RAID groups and LUNs - if Celerra only presents you with the templates, then it is not seeing those new LUNs.
Celerra can do custom RAID groups, but it will only accept certain R1, R5, and R6 configurations. For example, if you created a R5 with 3 disks, it will not see it. But I don't think you will have that issue with your R1 config.
Volumes and pools are greyed out because you don't have the Advanced Celerra Manager license. If you go under Licenses there should be a check box to enable it. Then you can create custom pools. But to create custom pools, Celerra must first see the LUNs.
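Once the Advanced license is enabled and the Celerra can see the LUNs as dvols, a user-defined pool can also be built from the Control Station CLI - a rough sketch only; the pool name and dvol names below are made up, and the exact nas_pool options should be checked against its man page:

# list the dvols the Celerra currently sees
nas_disk -list

# group two of them into a user-defined storage pool
nas_pool -create -name custom_r1_pool -volumes d20,d21

# confirm the new pool
nas_pool -list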
nandas
March 19th, 2009 12:00
Welcome to the EMC Support forums. Rainer has already provided the required details and the link to the step-by-step process.
Also - you may find a few older forum posts on this subject - one of the most recent is here:
http://forums.emc.com/forums/thread.jspa?threadID=111450&start=0&tstart=20
I am sure you'll have a good time with your new NX4. We look forward to your valued contributions to the forums as well.
Regards,
Sandip
MarkSRasmussen
March 20th, 2009 01:00
I'm seeing a pattern in your posts indicating that I'm not using the right tools. Rainer's link mentions Navisphere Manager, and there's mention of RAID groups & LUNs.
I'm obviously using the Celerra Manager interface, but without the option of doing any configuration in there (we don't have the advanced license). For the Navisphere setup, I've installed the "Navisphere Service Taskbar", but that only seems to be for initial hardware registration & installing new DAEs etc. If I browse to the IPs of any of our SPs, I'm presented with the "Navisphere Express" web interface - this is where I've been setting up my disk configs.
The Navisphere Express interface works with Disk Pools & Virtual Disks, not RAID groups & LUNs. When creating virtual disks (which must be the LUNs?), I can set no options other than the virtual disk name, so I'm not really sure where to set the Host ID.
Should I be using, and have I completely overlooked, the "Navisphere Manager" software on my setup CDs?
Rainer_EMC
March 20th, 2009 02:00
Sorry - I didn't realize you are using Navi Express.
On the NX4 there are two user interfaces, depending on which license you bought - NaviSphere Express and NaviSphere Manager.
Navi Express is kind of a light version, but for your purpose it does the same job.
What is confusing is that, in order to cater to customers unfamiliar with Navi and SAN management, it tries to make things easy and calls RAID groups "disk pools" and LUNs "virtual disks".
see:
http://corpusweb130.corp.emc.com/ns/common/post_install/Config_storage_FC_NaviExpress.htm
Alternatively - if you have an EMC engineer nearby - you can also ask him to download the new CSA with provisioning wizard and use that.
1 Attachment
NX4_Navi_Express_RAID_config.pdf
Rainer_EMC
March 20th, 2009 03:00
Since you've mentioned MS SQL - take a look at the Reference Architecture and Best Practices from Powerlink.
You can find White Papers at this location on Powerlink:
Home > Support > Technical Documentation and Advisories > White Papers > All White Papers
Also - as you can see in the Reference Architecture, you could actually configure your system with just the S-ATA hot spare, since it can stand in for a failed SAS drive as well.
Of course, in case of a failure your performance would drop to the S-ATA level, but you would have more capacity.
MarkSRasmussen
March 20th, 2009 05:00
Thanks - I guess I should have been even more explicit about what tools I was using; I didn't realize there were two completely different interfaces & naming schemes depending on the license.
I'm still having trouble getting things to work, and I fear it's probably a simple config issue that's holding me back. The system is not in production yet, so I have no problem making config changes if that's necessary.
Status:
I can create disk pools and virtual disks just like the manual says. There's still no mention of Host IDs - I see, however, that the manual names the disks "LUN XX" - is the XX the Host ID, or is the virtual disk name just an arbitrary name?
If I run a nas_checkup command, a single test fails (Checking if Symapi data is present) and the following issues are noted:
Warnings:
-----
Storage System: Check system lun configuration
Symptom: The following system luns are misconfigured: 6, 7
Action : Contact EMC Customer Service and refer to EMC Knowledgebase
emc146016. Include this log with your support request.
-----
Errors:
-----
Control Station: Check if Symapi data is present
Symptom: * SL7E9084400002 d9, no storage API data available
* SL7E9084400002 d10, no storage API data available
Action : Contact EMC Customer Service and refer to EMC Knowledgebase
emc146016. Include this log with your support request.
Storage System: Check if auto assign are disabled for all luns
Symptom: Cannot obtain LUN 20, 17 auto assign information from
CLARiiON SL7E9084400002
Action :
1. Run "/nas/sbin/navicli -h A_SL7E9084400002 getlun
command to get LUN auto assign information.
2. If the command returns errors, use the errors to investigate
further.
Storage System: Check if auto trespass are disabled for all luns
Symptom: Cannot obtain LUN 20, 17 auto trespass information from
CLARiiON SL7E9084400002
Action : Contact EMC Customer Service and refer to EMC Knowledgebase
emc146016. Include this log with your support request.
-----
If I go to the Connections section in Navisphere, four connections are listed as Active with the following warning: "We cannot determine the server operating system for this connection. Settings for this connection may have been changed. To correctly configure this connection for access to virtual disks, click 'Help' at the top right-hand side of this page." I'm not sure if that's anything to worry about.
Finally, if I go to the Celerra Manager and click Rescan, I get an error message that simply states "Rescan failed." / "Storage system Rescan failed." Otherwise, everything's green in the components view, in events, and everywhere else. If I create disk setups using the templates, the disk group is automatically found by the Celerra Manager a couple of minutes after initialization is done, so it seems it's able to see them somehow.
Edited: Removed old events from before system install.
Rainer_EMC
March 20th, 2009 08:00
Sorry it's taking so long for what should be a simple process.
What DART version are you using (see nas_version) ?
I think the fastest way to get you going is to open a case with EMC support and ask them to walk you through it in a WebEx session.
Just ask them to look at Primus emc192005.
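Both pieces of information they'll ask for can be pulled straight from the Control Station shell - just the two commands already mentioned in this thread:

nas_version     # shows the installed NAS/DART code, e.g. 5.6.x
nas_checkup     # full health check; attach its output to the support case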
> I can create disk pools and virtual disks just like the manual says. There's still no mention of Host IDs - I see however that the manual names the disks "LUN XX" - is the XX the Host ID, or is the virtual disk name just an arbitrary name?
you're right - on Navi Express you can't specify the Host ID (HLU)
the way it's supposed to work is that, starting from DART 5.6.39, the Celerra renames and renumbers the LUNs after a rescan to the values it wants
> Storage System: Check system lun configuration
> Symptom: The following system luns are misconfigured: 6, 7
> Action : Contact EMC Customer Service and refer to EMC Knowledgebase emc146016. Include this log with your support request.
yes, that one shows the Celerra is seeing LUNs 6 and 7 - but it should renumber them to numbers higher than 16 for use as data LUNs
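An easy way to verify whether that renumbering has happened is to trigger the scan from the Control Station and then list the disk volumes - a minimal sketch built from the nas_diskmark and nas_disk commands mentioned later in this thread (the -list option and the exact output columns are worth checking against your DART release):

# re-scan / mark the backend LUNs from the CLI
nas_diskmark -mark -all

# list the dvols the Celerra now knows about; the data LUNs
# should show up with device IDs above 16
nas_disk -list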
regards
Rainer
MarkSRasmussen
March 21st, 2009 09:00
I'm using DART 5.6.40-3.
> I think the fastest way to get you going is to open a case with EMC support and ask them to walk you through it in a WebEx session. Just ask them to look at Primus emc192005.
Starting to sound like a good idea, I'll try and get in touch with support.
> you're right - on Navi Express you can't specify the Host ID (HLU)
> the way it's supposed to work is that, starting from DART 5.6.39, the Celerra renames and renumbers the LUNs after a rescan to the values it wants
So the root cause for me not being able to see them is probably that my rescan is failing, thereby not allowing the Celerra to rename them properly.
> Storage System: Check system lun configuration
> Symptom: The following system luns are misconfigured: 6, 7
> Action : Contact EMC Customer Service and refer to EMC Knowledgebase emc146016. Include this log with your support request.

> yes, that one shows the Celerra is seeing LUNs 6 and 7 - but it should renumber them to numbers higher than 16 for use as data LUNs
But this is reported even though I only have the system pool & LUNs defined - I don't suppose those should be renamed?
Edit: SR #28868396 created.
Thanks again,
Mark
Peter_EMC
March 26th, 2009 07:00
From a Celerra point of view, RAID1 is only supported on the CX and CX3 backends.
On the newer backends (CX4, AX4-5), Celerra supports RAID1/0 with 2 disks instead, which is the same as RAID1 with 2 disks.
RAID1/0 with more than 2 disks is not supported for Celerra.
Regards
Peter
MarkSRasmussen
March 27th, 2009 08:00
So what you're saying is that if I, from Navisphere, create a "RAID1/0" disk pool with more than two disks, the Celerra won't support it? So in effect I can't create a 4-disk RAID10, effectively limiting write performance to that of a single RAID1? Or will it still stripe the file system across disk pools within the same performance profile, and thus give me equal performance to a "true" RAID 1/0?
Thanks!
MarkSRasmussen
March 27th, 2009 08:00
Thanks for replying.
I'm still awaiting an update on the SR case after I've posted my last comment.
1)
That sounds odd - through the sales process I was asked what config I wanted, so they could set it up from the factory. We started out wanting a 4-disk RAID10, but later changed it to a 2x2-disk RAID1, which is probably where things went wrong.
3)
nas_diskmark -mark -all gives the following error:
Error 5017: storage health check failed
SL7E9084400002 d9, no storage API data available
SL7E9084400002 d10, no storage API data available
Can these be remnants of the original secondary disk group that wasn't completely cleaned up, and thus haunting us now?
Rainer_EMC
March 27th, 2009 10:00
> A RAID10 with two disks, is that not similar/identical to a RAID1 in regards to structure and performance characteristics?
Yes, it is.
I think the confusion is because on the older CLARiiONs there was a RAID1 (only 2 disks) and a RAID1/0 (2, 4, 8, ... disks) config option. These days you can't create a RAID1 - it will just create a RAID1/0 with two disks.
> So what you're saying is that if I, from Navisphere, create a "RAID1/0" disk pool with more than two disks, the Celerra won't support it?
Correct.
For Celerra we prefer to work with two-disk RAID1/0 LUNs and then use the Celerra AVM volume manager to stripe across multiple of them.
> Or will it still stripe the file system across disk pools within the same performance profile, and thus give me equal performance to a "true" RAID 1/0?
Correct - we just do the striping on the Celerra side, so you get the same performance but more flexibility.
Technically speaking we stripe within the members of a pool - not across pools.
Rainer_EMC
March 27th, 2009 11:00
Sorry - I should have said I mean the Celerra system-defined storage pool.
The Navi Express habit of also using the term "pool" makes it confusing.
All the RAID1/0 LUNs that the Celerra sees will get associated as dvols with the clarsas_r10 Celerra system-defined pool.
From there, if you use AVM it will try to find 4, 3, 2, or 1 dvols to stripe - see the attached manual on page 27.
So in your case - if you assign two 1+1 RAID1/0 LUNs to the Celerra, then AVM will stripe between them and you automatically get the I/O performance of all four disks.
Or you can use manual volume management and stripe to your liking.
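To make that concrete, a rough sketch of both options from the Control Station - the file system name, size, dvol names (d20/d21) and stripe size below are only examples, and the exact nas_fs / nas_volume options should be checked against the attached manual:

# AVM path: create the file system against the system-defined pool;
# AVM picks the clarsas_r10 dvols and stripes across them for you
nas_fs -name sqlfs01 -create size=200G pool=clarsas_r10

# Manual path: build your own stripe across the two RAID1/0 dvols,
# wrap it in a metavolume, then create the file system on that
nas_volume -name sql_stripe -create -Stripe 262144 d20,d21
nas_volume -name sql_meta -create -Meta sql_stripe
nas_fs -name sqlfs01 -create sql_meta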
1 Attachment
MgVolFSA.pdf
MarkSRasmussen
March 27th, 2009 11:00
After deleting the missing/unbound disks from the NAS through nas_disk, I was able to create new RAID groups and have them discovered by the Celerra. This also cleared the errors I was getting from nas_checkup.
So I assume the procedure to delete a disk pool / LUN is to first delete the disk through nas_disk, and then unbind it from Navisphere afterwards.
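For anyone hitting the same thing, the cleanup looked roughly like this - the disk names (d9, d10) are the ones from the nas_checkup errors above, and the -perm flag on nas_disk -delete is an assumption on my part, so check the command's usage before running it:

# spot the stale / unbound dvols
nas_disk -list

# remove the stale disk entries from the Celerra side first
nas_disk -delete d9 -perm
nas_disk -delete d10 -perm

# then delete the corresponding virtual disks / disk pool in Navisphere
# Express, and rescan so the Celerra picks up the new layout
nas_diskmark -mark -all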