95 Posts
0
2987
VPLEX Cross-Connect Architecture Design - Zoning for vMSC for VMware
We will be going with VPLEX Metro in a VS2 dual-engine configuration at each data center, and we have IP-only connectivity stretched between the two DCs (5-mile radius). According to EMC, we just need IP connectivity between the two DCs for the VPLEX clusters. My question is: how can we zone the ESXi hosts for uniform access if we don't have FCoE or FC stretched between the two DCs?
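For context, uniform access would require each ESXi HBA to be zoned to the front-end ports of both the local and the remote VPLEX cluster, something like the sketch below (zone names and WWPNs are made up for illustration) - and that remote zone is exactly what we can't build without stretched FC:

```
! Hypothetical Cisco MDS zoning sketch for uniform (cross-connect) access
zone name esx01_hba0_vplexA_fe vsan 10
  member pwwn 21:00:00:24:ff:aa:bb:01   ! esx01 HBA0 (placeholder WWPN)
  member pwwn 50:00:14:42:60:aa:bb:00   ! local VPLEX front-end port (placeholder)
zone name esx01_hba0_vplexB_fe vsan 10
  member pwwn 21:00:00:24:ff:aa:bb:01   ! same esx01 HBA0
  member pwwn 50:00:14:42:70:cc:dd:00   ! remote VPLEX front-end port - needs a stretched fabric
```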
Intech1
57 Posts
0
December 8th, 2014 11:00
You've answered your own question: you can't zone for cross-connect.
Steve
sureshacs
95 Posts
0
December 8th, 2014 11:00
So we have no choice but to go with non-uniform access, where each local ESXi host writes to its local VNX and VPLEX replicates/synchronizes the LUNs? Can we create a stretched VMware DRS/HA cluster using non-uniform access? Please advise.
Intech1
57 Posts
1
December 9th, 2014 14:00
The magic is in the distributed volume. Failover will occur if, for example, site 1 loses access to its distributed device (virtual volume). Please read the posts here for further info. There has been a lot of discussion around SRM. Look for posts from Garyo - he's the man. Not to discount others, but I've found his responses invaluable.
Steve
Intech1
57 Posts
0
December 9th, 2014 14:00
Of course, the local VMs will not be able to access the remote cluster's paths to the virtual volume, because your fabric is not stretched.
Steve
Intech1
57 Posts
0
December 9th, 2014 14:00
Sorry about that, but my main point remains: the storage exists as an exact mirror at your remote site. vMotion to your heart's content. If the datastore exists as an exact mirror, then any VM at the remote site will be able to access it.
Steve
sureshacs
95 Posts
0
December 9th, 2014 14:00
I am not asking about SRM - we have it in place and know how it works. The question was about the VPLEX non-uniform configuration for a VMware metro cluster across two data centers: can we vMotion between hosts in the two different data centers?
garyo
89 Posts
0
December 9th, 2014 15:00
Thanks Steve, for the compliment, you're too kind!
sureshrambo
As Steve says, with uniform or non-uniform access, the ESX host still sees the same virtual volume.
With uniform access, it sees twice as many paths to the same volume (assuming you've configured each cluster equally). For uniform access we highly recommend PowerPath/VE with auto-standby, so that the local cluster's paths are preferred and the remote cluster's paths are used only in the event of a local path failure.
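As a rough example (the host name is a placeholder, and you should check the PowerPath/VE documentation for the exact syntax on your version), proximity-based auto-standby is set from the rpowermt management station:

```
# Hypothetical sketch: enable proximity-based auto-standby on one ESXi host
# so that paths to the non-preferred (remote) VPLEX cluster are put in standby.
rpowermt set autostandby=on trigger=prox host=esx01.example.com

# Then check the standby state of each path:
rpowermt display dev=all host=esx01.example.com
```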
With non-uniform access, each host sees only its local paths, but the magic of VPLEX cache coherency across distance returns the right data at the right time to the requesting host.
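If you want to sanity-check what each host actually sees, something like this from the ESXi shell will list the paths per device (the naa ID below is only a placeholder - substitute one of your own distributed virtual volumes):

```
# Placeholder device ID - substitute one of your VPLEX virtual volumes.
# Non-uniform: expect paths only to the local cluster's front-end ports.
# Uniform: expect roughly double, covering both clusters.
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```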
vMotion and all VMware functionality work the same whether you use non-uniform or uniform access. Uniform access does give you a bit more high availability, but at quite a cost, and it is not a requirement.
The only way to get front-end cross-connect in an IP-only WAN world (though I would never suggest it in practice) would be to use FCIP gateway boxes to merge the fabrics across the two clusters. Unfortunately, that is quite a lot of cost and management complexity for not a lot of added benefit, in my opinion.
Thanks,
Gary
sureshacs
95 Posts
1
December 9th, 2014 15:00
Awesome, Gary and Steve - thanks for the detailed answers.
We don't have an FCoE presence in our 7Ks - only our IP network will be stretched. That's exactly why I was asking whether we could use non-uniform access and still get all the VMware stretched-cluster functionality in place, while keeping it simpler to manage and easier to troubleshoot.
One more question, guys: within VMware, do you recommend two separate site-specific DRS/HA clusters (for the ESXi hosts), or can we mix and match hosts across the clusters?
sureshacs
95 Posts
0
December 9th, 2014 15:00
Perfect, I get it. As long as we can vMotion across both sites between the two clusters, we should be good to go. Our L2/L3 is stretched, but not FCoE.
Thanks for answer.
Intech1
57 Posts
1
December 9th, 2014 17:00
I vote to keep it simple yet still achieve maximum project benefit: two separate clusters.
Steve
Intech1
57 Posts
1
December 9th, 2014 18:00
Agreed. Eloquently put.
Steve
sureshacs
95 Posts
0
December 10th, 2014 06:00
Great, guys. Appreciate your help.