Mack, you mean a server that is assigned to a storage group from each array, not a storage group that spans arrays? The boundary of a storage group is the individual array; a storage group cannot span arrays.
I don't know if there is any official recommendation from EMC on this, but I have production systems that use storage from multiple arrays and have never had a problem with it. The main drawback to the configuration is that it makes the environment a little more complex to manage. Maintenance outages, FLARE updates, and Navisphere agent updates require more planning and consideration than on a system attached to a single array, since you now have to ensure that your HBA driver, host agent, and PowerPath levels are compatible with the FLARE revisions on more than one array.
The zoning will be the same, single initiator to multiple targets (or a single target); it will just be to a different array for the host in question. I try to avoid this configuration where possible to keep things as simple as I can, but a few situations have required me to implement a system with storage on two arrays. It has never caused any performance or stability issues in the year that the configuration has been in place.
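The compatibility cross-check Aran describes can be sketched in a few lines. This is a minimal Python illustration with a made-up support matrix (the revision numbers and supported-version sets here are hypothetical; the real data would come from EMC's interoperability documentation), showing why a host attached to two arrays is constrained by both FLARE revisions at once:

```python
# Hypothetical support matrix: which host-stack versions each FLARE
# revision supports. Real data would come from EMC's interop matrix.
SUPPORT_MATRIX = {
    "02.19": {"powerpath": {"4.5", "5.0"}, "hba_driver": {"7.x"}},
    "02.26": {"powerpath": {"5.0", "5.1"}, "hba_driver": {"7.x", "8.x"}},
}

def host_ok(arrays_flare, powerpath, hba_driver):
    """The host stack must be supported by the FLARE revision on EVERY
    array it is attached to -- the extra constraint a second array adds."""
    return all(
        powerpath in SUPPORT_MATRIX[flare]["powerpath"]
        and hba_driver in SUPPORT_MATRIX[flare]["hba_driver"]
        for flare in arrays_flare
    )

# PowerPath 5.0 appears in both rows, so attaching to a 02.19 array and
# a 02.26 array at once is fine; PowerPath 5.1 would block the 02.19 array.
print(host_ok(["02.19", "02.26"], "5.0", "7.x"))   # True
print(host_ok(["02.19", "02.26"], "5.1", "7.x"))   # False
```

The point is that upgrading any one component (or one array's FLARE) now requires checking the intersection of two support rows, not one.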
Yeah, we also have multiple arrays on a single host. We do try to use one array where possible, but it isn't feasible in all cases. There aren't any observed issues except what Aran mentioned about code revisions.
If you are using volume groups on hosts, I would advise limiting a single volume group to LUNs from a single array as much as possible. If the arrays differ in performance (path speeds, cache, load), your volume groups will not be affected by the difference this way.
Theoretically, having multiple arrays as part of a design would be desirable where array-level redundancy is required, though I am not sure if this is actively considered anywhere. There are setups where multiple arrays are connected to a single host for simple host-based multisite replication.
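One way to enforce the volume-group advice above in a provisioning script is to check that every LUN going into a VG lives on the same array. A minimal Python sketch (the device names and array serials are hypothetical; in practice you would pull the LUN-to-array mapping from the array management tools or the host agent):

```python
# Hypothetical inventory mapping LUN device paths to their array serial.
LUN_ARRAY = {
    "/dev/sdb": "APM00071234567",   # array A
    "/dev/sdc": "APM00071234567",   # array A
    "/dev/sdd": "APM00079876543",   # array B
}

def vg_spans_one_array(luns):
    """True if all proposed VG members live on a single array, so the VG's
    performance is not tied to the slower or busier of two arrays."""
    return len({LUN_ARRAY[lun] for lun in luns}) == 1

print(vg_spans_one_array(["/dev/sdb", "/dev/sdc"]))  # True: both on array A
print(vg_spans_one_array(["/dev/sdb", "/dev/sdd"]))  # False: mixes arrays
```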
Thanks, guys; though my choice of wording was incorrect, you got my meaning. I am thinking that I will try to avoid doing this for the same reasons you have mentioned. Thanks for the responses. mack
"though i am not sure if this is actively considered anywhere"
Yes, we use it. It saves the cost of the BCV, SRDF, and MirrorView software :-$. We present LUNs from two different arrays and mirror them on the server (cluster nodes), which gives us redundancy as well as a kind of DR, since the two arrays are geographically dispersed. This works fine except for some odd glitches in the cross-site Fibre connectivity. You should be careful not to assign LUNs to the server (single node) from the remote-site array; otherwise, glitches in the cross-site connectivity can lead to issues.
How do you mirror on the host side? I can imagine you could use LVM on *nix, but what about Windows? Veritas on your Windows boxes? I am kind of curious how you can keep two consistent copies of your data without impacting performance. I know that on HP-UX I could add LUNs from another site and add them to LVM as mirror volumes, but that would impact host performance, since a write would have to complete to both mirrors before it is acknowledged as complete. And if you have a considerable amount of latency between the sites, I could see that being a problem. How do you do it?
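The latency concern can be put in rough numbers. With synchronous host-side mirroring, a write is acknowledged only when the slowest leg completes, so effective write latency is approximately the maximum of the two legs. A back-of-the-envelope sketch (the millisecond figures are illustrative assumptions, not measurements):

```python
# Illustrative per-write service times in milliseconds.
local_leg_ms = 0.5                         # write to the local array
cross_site_rtt_ms = 2.0                    # round trip over the inter-site link
remote_leg_ms = 0.5 + cross_site_rtt_ms    # remote array write plus link RTT

# Synchronous mirror: the host gets its ack only after BOTH legs complete,
# so the slow (remote) leg sets the effective write latency.
effective_write_ms = max(local_leg_ms, remote_leg_ms)
print(effective_write_ms)                  # 2.5 ms, vs 0.5 ms local-only
```

With these numbers the cross-site link alone multiplies write latency by five, which is exactly the trade-off being asked about.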
dynamox (June 29th, 2007 13:00)
AranH1 (June 29th, 2007 13:00)
Kiran3 (July 1st, 2007 02:00)
mack-xwWos (July 2nd, 2007 10:00)
Navin-Swn_9 (July 3rd, 2007 03:00)
dynamox (July 3rd, 2007 04:00)