EMCProvenSoluti
1 Rookie
15 Posts
1
October 16th, 2015 02:00
Hi Duhaas
Yes, it is possible to add an individual TDEV to multiple storage groups and unique MVs as you are suggesting, provided that the TDEV is not assigned to an SG that is part of a FAST VP policy; a TDEV can belong to only one SG within a FAST VP policy.
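A minimal SYMCLI sketch of that layout (the SID, device ID, and group names below are placeholders, not values from this thread):

```shell
# Assumption: 1234, 0ABC, and all group/view names are illustrative placeholders.
# Add the same TDEV to two different storage groups.
symsg -sid 1234 -sg app_sg_a add dev 0ABC
symsg -sid 1234 -sg app_sg_b add dev 0ABC

# Present each SG through its own masking view, each with its own
# port group and initiator group.
symaccess -sid 1234 create view -name app_mv_a -sg app_sg_a -pg pg_a -ig ig_a
symaccess -sid 1234 create view -name app_mv_b -sg app_sg_b -pg pg_b -ig ig_b
```

In this sketch, at most one of app_sg_a / app_sg_b could be associated with a FAST VP policy.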
duhaas1
2 Intern
227 Posts
0
October 17th, 2015 05:00
Thanks as always, David. I just completed the work and it seems to be working as I expected and as you explained. One point of clarification.
While one dev can't be in two storage groups that each have a FAST policy associated with them, that one dev can still be in one group that is associated with a policy and one that is not:
and
The two storage groups in each of the views have the same devices in them. The group on the bottom has a FAST policy associated with it. The group on the top does not.
My understanding is that any host that has a VM associated with it in the top masking view will then only ever place its VM on the bound tier for that device.
Does that sound accurate?
KW160
121 Posts
1
October 17th, 2015 10:00
No: once the TDEV has been associated with a FAST policy, all the data in that TDEV will be placed according to the policy, regardless of which SG the data arrived through.
EMCProvenSoluti
1 Rookie
15 Posts
1
October 17th, 2015 12:00
Glad to be of assistance.
KW160 is correct. While a device may belong to only one FAST policy even though it is assigned to two SGs, the device will have the VP policy and the associated tiering characteristics applied as per the assigned policy, even when the writes come from hosts in the non-FAST MV/SG.
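If you want to confirm which SG carries the policy association for a device, a query along these lines can show it (a sketch; the SID and device ID are placeholders, and exact qualifiers vary by Solutions Enabler release):

```shell
# Assumption: 1234 and 0ABC are placeholder array and device IDs.
# List FAST VP policy-to-storage-group associations on the array.
symfast -sid 1234 list -association

# List the storage groups that contain the device, to see which one
# (at most one) carries the FAST association.
symaccess -sid 1234 list -type storage -dev 0ABC
```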
marvin_keys
27 Posts
0
March 13th, 2017 11:00
Hello, we have the same configuration that the original poster described, but now we want to delete one of the masking views and unmap the volumes from its port group. We do not want the volumes to be taken offline while we do this, as they are in use by the hosts in the other view (which uses a different PG).
We know that in VMAX "gen-2" environments you had to make the volumes Not_Ready before you could unmap them. We tested this on our VMAX3 400K and it worked fine (we deleted the MV with "-unmap") and it did not require us to make the volumes Not_Ready. Does anyone know whether this new behavior is documented somewhere, or is it an "undocumented feature"?
Thanks.
KW160
121 Posts
0
March 13th, 2017 16:00
This specific functionality is the same on all VMAX platforms. You've always been able to use the -unmap flag when deleting a masking view without explicitly marking the device Not_Ready.
This is not new with VMAX3/AF. The only time you needed to mark a device (or a single path) as Not_Ready on older-generation VMAXes was when you were manually unmapping with symconfigure. On VMAX3, manual mapping and unmapping are disabled for ports with masking (ACLX) enabled, so that approach is rarely used in Open Systems environments.
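For reference, the delete-and-unmap operation discussed above looks like this in SYMCLI (the SID and view name are placeholders):

```shell
# Assumption: 1234 and app_mv_b are placeholders for your array ID and view name.
# Delete the masking view and unmap its devices from the ports in one step;
# the devices remain online through any other masking view that still
# presents them on different ports.
symaccess -sid 1234 delete view -name app_mv_b -unmap
```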
dynamox
9 Legend
20.4K Posts
0
March 13th, 2017 20:00
And to add: to do what Marvin describes, you would not want to make the volume Not_Ready; you would actually make it Write_Disabled (WD) on those specific FAs, and only then unmap it. It would continue to be available to hosts through the other FAs.
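A sketch of that per-FA step (the SID, device ID, and director/port are placeholders; confirm the exact qualifiers against your Solutions Enabler documentation):

```shell
# Assumption: 1234, 0ABC, and director 7E port 0 are illustrative placeholders.
# Write-disable the device only on the specific FA port being retired,
# leaving it read/write on the paths through the other FAs.
symdev -sid 1234 write_disable 0ABC -SA 7E -p 0
# Only after this would you unmap the device from that port
# (e.g. by deleting the corresponding masking view with -unmap).
```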