December 22nd, 2014 11:00

Help with DR setup for VMware 5.5 environment

Hello,

We are looking to set up a primary site with 4-5 VMware hypervisors utilizing a VNX5400. We would like the VM LUN(s) to replicate to an offsite DR site running the same hardware as described above. The setup would be hot/cold, where we would fully cut over all of the VMs from the primary site to the DR site.

We currently have purchased the following software:

For the hypervisor environment:

VMware vSphere Std with Operations Manager

Storage Array:

VNXB OE PER TB PERFORMANCE

VNX5200 Unisphere Block Suite

VNX5200 FAST Suite

VNX5200 Local Protection Suite

VNX5200 Remote Protection Suite

Questions we have are:

Does the software described above give us the capability we are looking for?

Any input would be really appreciated.

Thanks

joe

522 Posts

February 12th, 2015 12:00

No. Since you are going to use RecoverPoint (per the previous threads), RP will handle all the replication and MirrorView wouldn't be used. It's not even needed on the array, nor should it be enabled if you plan to use the VNX splitter with RecoverPoint, per the release notes:

[Screenshot: excerpt from the RecoverPoint release notes]

522 Posts

February 12th, 2015 13:00

The RP license will be a LAC file from EMC that is simply uploaded through the RP GUI once the cluster is formed and running (just point it to the file and it will load what you are licensed for - it is the very first screen you will see once you log into the RP GUI for the first time).

The multiple vRPAs aren't required because of multipathing; it is a cluster requirement to have a minimum of 2 in a cluster (4 if you have a DR site, since each site is a "cluster" with RP). The resource footprint of the vRPAs will vary with their load, since they will essentially act like any other VM in your setup... they will only scale so far depending on what type of vRPA you deploy (small/medium/large). I'm not sure if there are overhead numbers in the release notes on the vRPAs or not... I'd have to check. If there are no bottlenecks in the setup to begin with, it shouldn't be overly noticeable.
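If it helps to see the cluster math in one place, here's a trivial sketch of the sizing rule above (my own illustration, not from the release notes):

```python
def min_vrpas(num_sites: int, per_cluster: int = 2) -> int:
    """Each RecoverPoint site is its own cluster and needs at least
    two vRPAs, so a two-site (primary + DR) setup needs four total."""
    return num_sites * per_cluster

print(min_vrpas(2))  # primary + DR -> 4
```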

26 Posts

February 12th, 2015 13:00

Will the vRPA communicate with the VNX using the SP management 1G interfaces, or would I have to repurpose one of the 10G ports that I showed above?

26 Posts

February 12th, 2015 13:00

Now it's starting to click... I want to thank you for all of this great information.

I'm looking over the website to see how to use the RecoverPoint licenses that I was given. I'm not sure if it's a matter of downloading the OVF file and then applying some license to it.

We currently have the VNX targeting a couple of LUNs via multipathing, using 2x 10Gb iSCSI ports per SP. Will I have to run multiple vRPAs because of the multipathing? The question of how resource-intensive these vRPAs are came up in discussion as well.

522 Posts

February 12th, 2015 13:00

The vRPA communicates with the VNX to do the splitting over iSCSI, so there have to be iSCSI ports configured on the VNX for use with native iSCSI, either 1Gb or 10Gb. These don't have to be the MirrorView ports, but since you mentioned it, MirrorView should be disabled on the array, and once that is done you can either re-use those ports or use any of the other iSCSI ports (1Gb or 10Gb) you have available for the iSCSI requirements of RecoverPoint.

-Keith
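If you want to confirm which iSCSI ports are configured before deciding, here's a rough sketch of checking them from a management host with Navisphere CLI driven from Python; the SP addresses are placeholders for your own:

```python
import subprocess

# Hypothetical SP management IPs; substitute your own.
SPS = ["10.160.19.10", "10.160.19.11"]

for sp in SPS:
    # 'connection -getport' lists the front-end iSCSI ports and their
    # IP configuration on a VNX/CLARiiON array.
    out = subprocess.run(
        ["naviseccli", "-h", sp, "connection", "-getport"],
        capture_output=True, text=True, check=True,
    )
    print(f"--- {sp} ---\n{out.stdout}")
```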

26 Posts

February 16th, 2015 10:00

echolaughmk wrote:

The vRPA communicates with the VNX to do the splitting over iSCSI, so there have to be iSCSI ports configured on the VNX for use with native iSCSI, either 1Gb or 10Gb. These don't have to be the MirrorView ports, but since you mentioned it, MirrorView should be disabled on the array, and once that is done you can either re-use those ports or use any of the other iSCSI ports (1Gb or 10Gb) you have available for the iSCSI requirements of RecoverPoint.

-Keith

Currently I have two dedicated iSCSI ports on each SP for iSCSI via multipathing for the VMWare environment.

In order for the vRPA to handle the splitting, will I have to enable another iSCSI interface on each SP? If so, would this have to reside on its own network range, separate from the current production iSCSI networks?

Thanks

26 Posts

February 17th, 2015 07:00

echolaughmk wrote:

As long as they are segmented from the vRPA LAN/WAN, they can be shared with host traffic (NOT anything related to MirrorView, though) at the trade-off of possible performance degradation, since they are shared interfaces. The key point is to ensure they are segmented from the LAN/WAN interfaces you would configure. Take a look at the Deploying vRPA Technical Notes on support.emc.com; that should detail the implementation process well. Best practice would ideally have them separated for ultimate isolation, for performance and scalability.

-Keith

iSCSI1 & 2 are referring to the multipath iSCSI networks, correct?

Currently have the following setup for the iSCSI:

iSCSI1: 10.160.19.224 /27

iSCSI2: 10.160.19.96 /27

Both on separate VLANs.

I was thinking of extending those VLANs to the current 10G interface I have set up for general VM networks.

Can the LAN for the RPA be on the same network as the ESX management / VNX SP management?

522 Posts

February 17th, 2015 07:00

As long as they are segmented from the vRPA LAN/WAN, they can be shared with host traffic (NOT anything related to MirrorView, though) at the trade-off of possible performance degradation, since they are shared interfaces. The key point is to ensure they are segmented from the LAN/WAN interfaces you would configure. Take a look at the Deploying vRPA Technical Notes on support.emc.com; that should detail the implementation process well. Best practice would ideally have them separated for ultimate isolation, for performance and scalability.

[Attachment: RPA1.jpg]

-Keith
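To make the segmentation point concrete, here's a minimal sketch with made-up VLAN numbers that checks a planned port-group layout keeps the vRPA LAN/WAN off the iSCSI VLANs:

```python
# Hypothetical planned port groups: name -> VLAN ID.
port_groups = {
    "vRPA-LAN": 100,   # vRPA management
    "vRPA-WAN": 101,   # inter-site replication
    "iSCSI-1":  190,   # existing multipath iSCSI
    "iSCSI-2":  191,
}

lan_wan = {port_groups["vRPA-LAN"], port_groups["vRPA-WAN"]}
iscsi = {port_groups["iSCSI-1"], port_groups["iSCSI-2"]}

# The rule above: iSCSI may share VLANs with host traffic,
# but must be segmented from the vRPA LAN/WAN.
assert lan_wan.isdisjoint(iscsi), "vRPA LAN/WAN must not share a VLAN with iSCSI"
```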

26 Posts

February 17th, 2015 09:00

echolaughmk wrote:

The LAN can be on the same network as the ESX/VNX management; there won't be a huge amount of communication going on there.

As for iSCSI, you should be OK with how you have them set up from a VLAN perspective.

I will need to add different iSCSI IPs for each vRPA, since I will still be connecting the ESX hosts to the SAN via iSCSI at the same time.

So I'm thinking the following:

ESX iSCSI host setup:

iSCSI1: 10.160.19.224 /27

iSCSI2: 10.160.19.96 /27

vRPA1:

iSCSI1: 10.160.19.225 /27

iSCSI2: 10.160.19.97 /27

vRPA2:

iSCSI1: 10.160.19.226 /27

iSCSI2: 10.160.19.98 /27

On the iSCSI side, the iSCSI traffic doesn't need to be routable between the protected and recovery sites, since the WAN will be used for the replication, correct?
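As a quick sanity check on that addressing, here's a short sketch using Python's standard ipaddress module with the values above:

```python
import ipaddress

# The two /27 iSCSI subnets from the plan above.
subnet1 = ipaddress.ip_network("10.160.19.224/27")
subnet2 = ipaddress.ip_network("10.160.19.96/27")

vrpa_ips = {
    "vRPA1": ("10.160.19.225", "10.160.19.97"),
    "vRPA2": ("10.160.19.226", "10.160.19.98"),
}

for name, (ip1, ip2) in vrpa_ips.items():
    # hosts() excludes the network and broadcast addresses, so this
    # also flags unusable assignments (e.g. .224 itself in that /27).
    ok = (ipaddress.ip_address(ip1) in subnet1.hosts()
          and ipaddress.ip_address(ip2) in subnet2.hosts())
    print(f"{name}: {'OK' if ok else 'check addressing'}")
```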

522 Posts

February 17th, 2015 09:00

The LAN can be on the same network as the ESX/VNX management; there won't be a huge amount of communication going on there.

As for iSCSI, you should be OK with how you have them set up from a VLAN perspective.

522 Posts

February 17th, 2015 09:00

Correct... the vRPA WAN configuration will be doing the replication, not the iSCSI at each site (assuming you have a VNX at the target site set up the same way with the RPAs).

26 Posts

February 18th, 2015 10:00

echolaughmk wrote:

Correct... the vRPA WAN configuration will be doing the replication, not the iSCSI at each site (assuming you have a VNX at the target site set up the same way with the RPAs).

Just to make sure I'm not spazzing out here: are the vRPAs basically splitting the current iSCSI traffic that is established between the ESX hosts and the SAN/LUNs?

522 Posts

February 18th, 2015 11:00

The VNX will still be your splitter. The vRPAs only use iSCSI to talk directly to the VNX for storage operations, since they are virtual (repository, journal, production, and copy volumes). If they were physical RPAs, that traffic would be in-band over FC, so nothing changes other than the hardware involved and how it talks to the array. You will end up masking those iSCSI ESX LUNs to the vRPAs, dropping them all into a single storage group just like you would with physical RPAs, but the splitter will still be the VNX splitter licensed for RP/SE (since you are using vRPAs).
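For reference, here's a rough sketch of that masking step scripted with naviseccli from Python; the SP IP, group name, and LUN numbers are all placeholders, and the exact HLU/ALU mapping will depend on your layout:

```python
import subprocess

SP = "10.160.19.10"      # placeholder SP management IP
GROUP = "vRPA_Cluster"   # hypothetical storage group holding all the vRPAs
ALUS = [10, 11, 12]      # placeholder array LUN numbers to present

def navi(*args: str) -> None:
    """Run a naviseccli command against the SP, raising on failure."""
    subprocess.run(["naviseccli", "-h", SP, *args], check=True)

# One storage group for all the vRPAs, per the post above.
navi("storagegroup", "-create", "-gname", GROUP)

# Mask each replicated LUN into the group (HLU = host-side LUN number,
# ALU = array-side LUN number).
for hlu, alu in enumerate(ALUS):
    navi("storagegroup", "-addhlu", "-gname", GROUP,
         "-hlu", str(hlu), "-alu", str(alu))
```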

26 Posts

February 18th, 2015 12:00

echolaughmk wrote:

The VNX will still be your splitter. The vRPAs only use iSCSI to talk directly to the VNX for storage operations, since they are virtual (repository, journal, production, and copy volumes). If they were physical RPAs, that traffic would be in-band over FC, so nothing changes other than the hardware involved and how it talks to the array. You will end up masking those iSCSI ESX LUNs to the vRPAs, dropping them all into a single storage group just like you would with physical RPAs, but the splitter will still be the VNX splitter licensed for RP/SE (since you are using vRPAs).

So would it be best to separate the current iSCSI traffic? Meaning, create a separate iSCSI network just for the vRPAs?

522 Posts

February 18th, 2015 12:00

Yes, ideally. That is what I was trying to get at above: it can be shared, but for performance and scalability it would be ideal to isolate it if you can, on both the VNX side and the virtual side when you create the virtual networks.
