May 3rd, 2010 12:00

Hello Froz,

I have identified Primus solution emc154151, which is an exact match for the issue you are having with Navisphere Express.  It indicates that somehow a third server gets added to a particular storage group when only one is allowed.  Running the CLI command 'naviseccli -h <SP_IP> storagegroup -list' will give you the list of attached hosts and storage groups and let you verify whether you in fact have three servers within a storage group.  If that is the case, you can then use the '-disconnecthost' switch of the storagegroup command to remove the extra host.  Depending on the level of FLARE code you are running, you may not need to use the '-allowaxintegration' switch.
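As a sketch of that sequence (the IP address, host name, and group name below are placeholders, not values from your array; exact switches can vary by Navisphere CLI revision):

```shell
# List every storage group with its attached HBAs and LUNs
# (10.1.1.10 stands in for one of your SP management IPs)
naviseccli -h 10.1.1.10 storagegroup -list

# If an unexpected host shows up in a group, detach it
# ('vcbproxy' and 'esx2' are placeholder host/group names;
#  -o suppresses the confirmation prompt)
naviseccli -h 10.1.1.10 storagegroup -disconnecthost -host vcbproxy -gname esx2 -o
```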

Best Regards,

Brent

May 4th, 2010 01:00

Hello Brent,

That is what the command reveals:

Storage Group Name:    esx2
Storage Group UID:     3A:C9:75:E4:22:AF:DD:11:89:88:00:60:16:28:7C:CE
HBA/SP Pairs:

  HBA UID                                          SP Name     SPPort
  -------                                          -------     ------
  20:00:00:1B:32:10:4C:8F:21:00:00:1B:32:10:4C:8F   SP A         1
  20:00:00:1B:32:10:4C:8F:21:00:00:1B:32:10:4C:8F   SP B         1
  20:00:00:1B:32:10:4C:8F:21:00:00:1B:32:10:4C:8F   SP A         0
  20:00:00:1B:32:10:4C:8F:21:00:00:1B:32:10:4C:8F   SP B         0
  20:01:00:1B:32:30:4C:8F:21:01:00:1B:32:30:4C:8F   SP A         1
  20:01:00:1B:32:30:4C:8F:21:01:00:1B:32:30:4C:8F   SP B         1
  20:01:00:1B:32:30:4C:8F:21:01:00:1B:32:30:4C:8F   SP A         0
  20:01:00:1B:32:30:4C:8F:21:01:00:1B:32:30:4C:8F   SP B         0
  20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP A         0
  20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP B         0
  20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP A         1
  20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP B         1
  20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP B         1
  20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP A         1
  20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP B         0
  20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP A         0
  20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP A         0
  20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP B         0
  20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP A         1
  20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP B         1

HLU/ALU Pairs:

  HLU Number     ALU Number
  ----------     ----------
    0               2
    1               4
    2               0
    3               1
    4               5
    5               3
    6               7
    7               8
    8               10
    9               11
    10              12
Shareable:             YES

Storage Group Name:    dwofpsnew
Storage Group UID:     26:ED:A4:12:DF:80:DD:11:89:89:00:60:16:15:7B:F3
HBA/SP Pairs:

  HBA UID                                          SP Name     SPPort
  -------                                          -------     ------
  20:00:00:1B:32:13:33:AD:21:00:00:1B:32:13:33:AD   SP A         1
  20:00:00:1B:32:13:33:AD:21:00:00:1B:32:13:33:AD   SP B         1
  20:00:00:1B:32:13:33:AD:21:00:00:1B:32:13:33:AD   SP A         0
  20:00:00:1B:32:13:33:AD:21:00:00:1B:32:13:33:AD   SP B         0
  20:01:00:1B:32:33:33:AD:21:01:00:1B:32:33:33:AD   SP A         1
  20:01:00:1B:32:33:33:AD:21:01:00:1B:32:33:33:AD   SP B         1
  20:01:00:1B:32:33:33:AD:21:01:00:1B:32:33:33:AD   SP A         0
  20:01:00:1B:32:33:33:AD:21:01:00:1B:32:33:33:AD   SP B         0

HLU/ALU Pairs:

  HLU Number     ALU Number
  ----------     ----------
    0               9
    1               6
Shareable:             YES

Storage Group Name:    dwodbnew
Storage Group UID:     08:C5:19:56:C5:6E:DD:11:89:87:00:60:16:15:7B:F3
HBA/SP Pairs:

  HBA UID                                          SP Name     SPPort
  -------                                          -------     ------
  20:00:00:1B:32:13:19:A6:21:00:00:1B:32:13:19:A6   SP A         0
  20:00:00:1B:32:13:19:A6:21:00:00:1B:32:13:19:A6   SP B         0
  20:00:00:1B:32:13:19:A6:21:00:00:1B:32:13:19:A6   SP A         1
  20:00:00:1B:32:13:19:A6:21:00:00:1B:32:13:19:A6   SP B         1
  20:01:00:1B:32:33:19:A6:21:01:00:1B:32:33:19:A6   SP A         0
  20:01:00:1B:32:33:19:A6:21:01:00:1B:32:33:19:A6   SP B         0
  20:01:00:1B:32:33:19:A6:21:01:00:1B:32:33:19:A6   SP A         1
  20:01:00:1B:32:33:19:A6:21:01:00:1B:32:33:19:A6   SP B         1

HLU/ALU Pairs:

  HLU Number     ALU Number
  ----------     ----------
    0               14
    1               13
Shareable:             YES

As far as I understand the output, it tells me that I have three groups. The first group holds the two ESX hosts plus a VCB host.

What should I remove?

Nothing was done on the CLI.

Thanks in advance.

Froz

May 4th, 2010 12:00

Thanks for the response.  It looks like your server-to-storage-group configuration is OK.  Can you tell me when the issue first started?  Were any changes made prior to this occurring?  Is the symptom the same when you browse to each SP?

On another note, I have been reviewing the output you provided and was curious about the number of initiators you have zoned to your array from your ESX server 'esx2'.  Are you running PowerPath for ESX?  If not, you really gain no multipathing benefit from having a particular HBA zoned to two ports per SP; PowerPath would provide that load balancing and multipathing.  Also, as an FYI, the rule with the AX4 is that if you do not have the Expansion Tier Enabler loaded, you can connect a maximum of 10 HBAs to an SP and a maximum of 10 servers to the storage system.  Currently, you have 9 HBAs connected to each SP.  With the Enabler loaded, you can connect a maximum of 64 HBAs to an SP and a maximum of 64 servers to the array.

As far as your array management issue is concerned, if you haven't already, it would be good to gather SPCollects from each SP ('naviseccli -h <SP_IP> spcollect'), then use the 'managefiles' command to retrieve them from the array, and initiate a service call through the EMC Support Center at 800-782-4362.
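A sketch of that collection, assuming placeholder SP management IPs and a local Windows path (the retrieved file name is array-generated, so it is shown here only as a placeholder):

```shell
# Kick off an SPCollect on each SP (placeholder management IPs)
naviseccli -h 10.1.1.10 spcollect
naviseccli -h 10.1.1.11 spcollect

# After a few minutes, list the files available on the SP,
# then retrieve the generated zip to a local directory
naviseccli -h 10.1.1.10 managefiles -list
naviseccli -h 10.1.1.10 managefiles -retrieve -path C:\spcollects -file <spcollect_zip>
```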

Regards,

Brent

May 5th, 2010 12:00

Also, for the 'esx2' storage group, the second server appears to have three HBAs - this might be overkill, as Brent mentioned - you really only need two HBAs for redundancy. This server has 12 paths to the array.

20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP A         0
20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP B         0
20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP A         1
20:00:00:1B:32:10:09:91:21:00:00:1B:32:10:09:91   SP B         1


20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP B         1
20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP A         1
20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP B         0
20:00:00:E0:8B:0E:64:91:21:00:00:E0:8B:0E:64:91   SP A         0


20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP A         0
20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP B         0
20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP A         1
20:01:00:1B:32:30:09:91:21:01:00:1B:32:30:09:91   SP B         1
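The path count above is just arithmetic - each HBA zoned to both ports of both SPs contributes four paths:

```python
# Paths seen by a host = HBAs x SPs x ports zoned per SP
hbas = 3          # this server's three HBAs
sps = 2           # SP A and SP B
ports_per_sp = 2  # ports 0 and 1 on each SP

print(hbas * sps * ports_per_sp)  # -> 12
```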

glen

May 10th, 2010 23:00

After removing the VCB proxy, Navisphere Express was accessible again. That raises the following question:

What AX4 Fibre Channel configuration (hardware and software) do I need to run vSphere with two hosts plus Consolidated Backup on a third host?

If this works, what would be the best practice for assigning the vdisks, since we do not have any groups in Navisphere Express?

Your input is highly appreciated.

Rgds

May 11th, 2010 11:00

Out of curiosity, what version of FLARE code is on this array (as in, which FLARE 23 patch)?

I have seen similar issues with NX4 arrays, which I am currently working on with engineering.

Regards

Gearoid

May 11th, 2010 12:00

Hi Gearoid,

it is FLARE-Operating-Environment: 02.23.050.5.705.

Rgds,

Frank

May 11th, 2010 12:00

Hmmm, I didn't check at 705 code, but I know the issue does exist at 707.

When I get my test array back to life, I'll double-check.

Cheers

Gearoid
