duhaas1

Multiple Storage Groups, same tdev, different masking view...


I'm looking to do the following:

Storage Group A - TDEV A

Storage Group B - TDEV A

Masking View A::IG-A::PG-A::SG-A

Masking View B::IG-B::PG-B::SG-B

The only thing the two masking views would share is the devices inside the storage groups, even though they are two different storage groups. Each initiator group is a group of VMware hosts, and the devices are VMFS volumes.

Right now my masking views look like:

[Screenshot of the current masking views]

The top and the bottom views are the ones I'm looking at. Right now I'm using the same storage group in both, and I'd prefer to use a different storage group for the top one, but with the same devices inside it as the bottom one.

That would let me, as I complete the migration and prepare to enable replication using RecoverPoint, remove the device from the top view, which doesn't have its ports exposed to RecoverPoint, essentially eliminating a message like this from RecoverPoint:

[Screenshot of the RecoverPoint message]

Because I used the same storage group for each VMware cluster, I can't remove the devices from the top group that I'm looking to retire; those ports remain tagged to the device, and I can't seem to unmap them without Unisphere complaining. So I'm wondering: would my multiple-storage-group approach work?


Re: Multiple Storage Groups, same tdev, different masking view...


Hi Duhaas

Yes, it is possible to add an individual TDEV to multiple storage groups and unique MVs as you are suggesting, provided that the TDEV is not already assigned to an SG in a FAST VP policy. A TDEV can only belong to a single SG that is associated with a FAST VP policy.
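For anyone following along, with Solutions Enabler the setup could look roughly like this (the SID, device number, and group names here are made up for illustration):

```shell
# Create two storage groups containing the same TDEV (device 0ABC here)
symaccess -sid 1234 create -name SG-A -type storage devs 0ABC
symaccess -sid 1234 create -name SG-B -type storage devs 0ABC

# Build one masking view per cluster, each with its own IG and PG,
# but backed by the two SGs that hold the same device
symaccess -sid 1234 create view -name MV-A -sg SG-A -pg PG-A -ig IG-A
symaccess -sid 1234 create view -name MV-B -sg SG-B -pg PG-B -ig IG-B
```

Later, removing the device from SG-A (or deleting MV-A entirely) leaves the paths through MV-B untouched.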

duhaas1

Re: Multiple Storage Groups, same tdev, different masking view...


Thanks as always, David. I just completed the work and it seems to be working as I expected and as you explained. One point of clarification:

While one dev can't be in two storage groups that each have a FAST policy associated with them, that one dev can still be in one group that is associated with a policy and one that is not:

[Screenshot of the first masking view]

and

[Screenshot of the second masking view]

The two storage groups in each of the views have the same devices in them. The group on the bottom has a FAST policy associated with it; the group on the top does not.

My understanding is that any host with a VM in the top masking view will only ever place that VM's data on the bound tier for the device.

Does that sound accurate?

KW160

Re: Multiple Storage Groups, same tdev, different masking view...


No, once the TDEV has been associated with a FAST policy, all the data in that TDEV will be placed according to the policy, regardless of which SG the data arrived through.
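One way to sanity-check this is to look at the policy-to-SG associations directly. A rough Solutions Enabler sketch (the SID is a placeholder):

```shell
# List FAST VP policy-to-storage-group associations; a device follows
# the policy of the one associated SG it belongs to, no matter which
# SG/masking view the I/O arrives through
symfast -sid 1234 list -association
```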

Re: Multiple Storage Groups, same tdev, different masking view...


Glad to be of assistance.

KW160 is correct. While a device may only belong to one FAST policy despite being assigned to two SGs, the device will have the VP policy and its associated tiering characteristics applied per the assigned policy, even when writes come from hosts in a non-FAST MV/SG.

marvin_keys

Re: Multiple Storage Groups, same tdev, different masking view...


Hello, we have the same configuration that the original poster described, but now we want to delete one of the masking views and unmap the volumes from its port group, without the volumes being taken offline while we do this, since they are in use by the hosts in the other view (which has a different PG).

We know that in VMAX "gen-2" environments you had to make the volumes Not Ready before you could unmap them. We tested this on our VMAX3 400K and it worked fine (delete the MV with "-unmap") and did not require us to make the volume Not Ready. Does anyone know if this new ability is documented somewhere, or is it an "undocumented feature"?
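For reference, the operation we tested was of this general shape (the SID and view name are placeholders):

```shell
# Delete the masking view and unmap the devices from the ports in its
# port group; the same devices stay online through the other view's
# (different) ports
symaccess -sid 1234 delete view -name MV-A -unmap
```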

Thanks.

0 Kudos
KW160

Re: Multiple Storage Groups, same tdev, different masking view...


This specific functionality is the same on all VMAX platforms: you've always been able to use the -unmap flag when deleting a masking view without explicitly marking the device Not Ready.

This is not new with VMAX3/All Flash. The only time you needed to mark a device (or a single path) Not Ready on older-generation VMAX arrays was when you were manually unmapping with symconfigure. On VMAX3, manual mapping and unmapping is disabled on ports with masking (ACLX) enabled, so it is rarely used in open-systems environments.

dynamox

Re: Multiple Storage Groups, same tdev, different masking view...


And to add: to do what Marvin describes, you would not make the volume Not Ready; you would actually make it Write Disabled (WD) on those specific FAs, and only then unmap it. It would continue to be available to hosts through the other FAs.
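As a rough sketch of that sequence with Solutions Enabler (the SID, device number, and director/port numbers are made up; check the exact syntax against your SE version):

```shell
# Write-disable the device on just the FA paths being retired
symdev -sid 1234 -sa 7E -p 0 write_disable 0ABC

# Then unmap it from those director ports; the device stays read/write
# on the remaining FAs serving the other masking view
symconfigure -sid 1234 -cmd "unmap dev 0ABC from dir 7E:0;" commit
```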
