March 1st, 2016 13:00

Cascaded initiator groups in XtremIO?

We need to isolate certain volumes from specific hosts in a cluster. Specifically, with AIX LVM in a cluster, while we allow HOST-1 and HOST-2 to see the same "shared" volumes (at the same HLU!), we need to ensure that HOST-1 does not see HOST-2's boot volume, and vice versa.

We can create cascaded initiator groups on VMAX to do this, and we can create separate storage views on VPLEX to do this. Unfortunately, XtremIO does not give us the option to put an initiator name or WWN in multiple initiator groups. So far all we have been able to do is create a separate initiator group for each host in the cluster, but that creates a maintenance burden: we have to ensure every host gets the same volumes added and mapped at the same HLU. Does anyone have a better solution? Thanks.

64 Posts

March 1st, 2016 18:00

Use tags!

Create the initiator groups for each host, and then tag multiple initiator groups with the same tag.

You can then either right-click the tag name under Initiator Groups, select Create/Modify Mappings, and select the LUNs to map, or select the LUNs, select Create/Modify Mappings, and search for the tag name to find the IGs that carry it.

1 Rookie • 20.4K Posts

March 1st, 2016 13:00

+1, I am doing the same thing with XenServer hosts that boot off XtremIO (separate initiator groups for each host, with the shared volumes presented to each initiator group).

27 Posts

March 30th, 2016 07:00

Scott,

How do you do this from the command line? I can create volumes and tags, and tag volumes, but I don't see any option in the map-lun command to connect a volume tag to an initiator-group tag. We need to be able to show all of our commands in change controls, and you can't document a right-click in a GUI for change control reviewers.

5 Practitioner • 274.2K Posts

April 1st, 2016 09:00

Add the volume to each initiator group:

map-lun vol-id="XI01-Server-01" lun=0 ig-id="HOST1"

map-lun vol-id="XI01-Server-01" lun=0 ig-id="HOST2"
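For the change-control requirement above, one approach is to generate the map-lun commands with a small script and paste the output into the XMCLI (and attach the same output to the change record). This is only a rough sketch: the volume and initiator group names are hypothetical, and it assumes the shared volumes should start at the same HLU on every host.

HOST_IGS="HOST1 HOST2"
SHARED_VOLS="XI01-Shared-01 XI01-Shared-02 XI01-Shared-03"   # hypothetical volume names

for ig in $HOST_IGS; do
  hlu=10                                    # same starting HLU on every host IG
  for vol in $SHARED_VOLS; do
    echo "map-lun vol-id=\"$vol\" lun=$hlu ig-id=\"$ig\""
    hlu=$((hlu + 1))
  done
done

Running it only prints the map-lun lines, so the exact command list can be reviewed before anything is executed.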

727 Posts

April 1st, 2016 11:00

Mapping LUNs to initiator groups using tags in the CLI is not supported today. It is in our backlog right now (no timeframe for implementation yet).

14 Posts

April 15th, 2016 11:00

Is there a way via the CLI to collect all of the IGs that carry a given tag into a variable and then run the mapping against that? It looks like that might not be easy, but I am curious. show-initiator-groups doesn't give you much in terms of tag data; you have to show each IG individually via show-initiator-group ig-id=# to get the tag information for that IG, so I'm not sure how easy building that variable would be.
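The closest thing I can picture is a rough sketch like the one below. The IG list, tag name, and volume name are all made up, and it assumes the XMCLI commands can be run from a script (or replaced with the equivalent REST API calls) and that the tag name appears somewhere in the show-initiator-group output.

TAG="AIX-CLUSTER-01"          # made-up tag name
VOL="XI01-Shared-01"          # made-up volume name
HLU=10

# ig-names.txt holds one initiator group name per line, exported beforehand
while read -r ig; do
  if show-initiator-group ig-id="$ig" | grep -q "$TAG"; then
    echo "map-lun vol-id=\"$VOL\" lun=$HLU ig-id=\"$ig\""
  fi
done < ig-names.txt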

If you have enough HBAs, I have heard of folks in the VNX world doing another workaround, mostly UCS shops or places with converged adapters where it's easy to carve off four HBAs. The first two HBAs can be in the HOSTIG holding the boot LUN, while the other two HBAs can be part of the SHAREDIG that all hosts are in. On my VNXs it is all single storage groups, managed via good documentation and naviseccli scripts to keep the HLUs consistent.
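On XtremIO I imagine that layout would look something like the lines below (all names are hypothetical); the boot IGs would hold only that host's first pair of HBAs and the shared IG the remaining pair from every host, so it stays within the one-IG-per-initiator restriction.

map-lun vol-id="HOST1-BOOT" lun=0 ig-id="HOST1-BOOTIG"
map-lun vol-id="HOST2-BOOT" lun=0 ig-id="HOST2-BOOTIG"
map-lun vol-id="CLUSTER-SHARED-01" lun=10 ig-id="SHAREDIG"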

1 Rookie • 20.4K Posts

April 18th, 2016 10:00

vogie563 wrote:

If you have enough HBAs, I have heard of folks in the VNX world doing another workaround, mostly UCS shops or places with converged adapters where it's easy to carve off four HBAs. The first two HBAs can be in the HOSTIG holding the boot LUN, while the other two HBAs can be part of the SHAREDIG that all hosts are in. On my VNXs it is all single storage groups, managed via good documentation and naviseccli scripts to keep the HLUs consistent.

For big shops I would be concerned with the initiators-per-cluster limit.

727 Posts

April 19th, 2016 05:00

This is something we are looking at. What number of initiators would make sense for the people on this forum? I am trying to validate our roadmap plans.

1 Rookie • 20.4K Posts

April 19th, 2016 17:00

If we go with a minimum of 4 paths per host, that's only 256 systems. Shops that still run a lot of physical systems (we do, many RAC systems) could get close, especially if they go to more than 8 paths for large RAC clusters.

64 Posts

May 2nd, 2016 12:00

Either option will work, and we have customers doing each.

I would normally recommend Option B (one IG per host), simply because it allows you to assign a volume to a single host. Even if that's not required today, odds are that at some stage in the future you will want to, and if you've gone with Option A it's very difficult to get there; it requires an outage to at least some paths (if not hosts) while you make the change.

We've certainly had customers go with Option A and regret it at a later stage for this very reason...

Scott

27 Posts

May 2nd, 2016 12:00

It seems like both options have their downsides... With Option A you can't assign a device to one host and exclude it from the others. But with Option B, when assigning new volumes you could miss a host or get your host LUNs out of alignment across the cluster. Which is the lesser of two evils?

27 Posts

May 2nd, 2016 12:00

Since we can't use tags to map volumes to initiators in the CLI today, what is the best practice for mapping volumes to initiators when dealing with a cluster of hosts?

A) Create a single initiator group with all of the hosts' WWNs, then tag/map it in the GUI?

B) Create individual initiator groups, one per host, then individually tag/map them in the GUI?

C) Something else?

1 Rookie • 20.4K Posts

May 9th, 2016 08:00

I am not too crazy about Option A. Let's say I need to decommission a host: I would have to go and monkey around with a production initiator group and remove the WWNs for that specific host. I would rather just remove volumes from the initiator group that is being decommissioned. Option B looks more attractive to me, and unless you have multiple HBAs like with UCS, it's the same thing you do on VNX, where you add one LUN to multiple storage groups (boot-from-SAN scenario).
