I have a customer running 400 VMs (VMware) connected to shared storage (HP 3PAR).
The customer is concerned about the single point of failure the storage represents in his datacenter, and hence wants to introduce a second block storage array there. (He does not want HP as the second storage unit because of vendor/customer issues.)
Should the first HP 3PAR array fail for whatever reason, he wants the second storage device to take over immediately.
The customer has no second DR site; this is all within the same datacenter.
Would positioning VPLEX as a solution in his current datacenter make sense, so that data from Storage #1 is available on Storage #2 at all times, and in the event Storage #1 fails, the cluster in the datacenter can continue to access data from Storage #2?
Would this work ?
Yes, this would work with a VPLEX Local, where you would bring both arrays behind the VPLEX and create RAID1 virtual volumes to present to your ESX hosts. The legs are mirrored by the VPLEX, so every write is mirrored to both legs of the RAID1 device (the HP array and the new array) and only then acknowledged back to the host. With this configuration you have redundancy in the back end: should an entire array fail, it will be seamless to the host.
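To make the write path concrete, here is a minimal sketch (plain Python, not real VPLEX code; the class and leg names are made up for illustration) of the RAID1 semantics described above: a write is acknowledged to the host only after both legs have it, and reads survive the loss of either leg.

```python
# Toy model of a RAID1 virtual volume with two back-end legs.
# Names like ArrayLeg / Raid1VirtualVolume are illustrative only.

class ArrayLeg:
    """Stands in for one back-end array behind the virtual volume."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.online = True

    def write(self, lba, data):
        if not self.online:
            raise IOError(f"{self.name} is offline")
        self.blocks[lba] = data

    def read(self, lba):
        if not self.online:
            raise IOError(f"{self.name} is offline")
        return self.blocks[lba]


class Raid1VirtualVolume:
    """Mirrors every write to both legs before acking the host;
    reads can be served from any surviving leg."""
    def __init__(self, leg_a, leg_b):
        self.legs = [leg_a, leg_b]

    def write(self, lba, data):
        for leg in self.legs:      # write goes to every leg
            leg.write(lba, data)
        return "ack"               # host ack only after both legs succeed

    def read(self, lba):
        for leg in self.legs:
            if leg.online:
                return leg.read(lba)
        raise IOError("all legs failed")


vol = Raid1VirtualVolume(ArrayLeg("hp-3par"), ArrayLeg("new-array"))
vol.write(0, b"vmdk-block")
vol.legs[0].online = False     # simulate the 3PAR failing outright
print(vol.read(0))             # read is still served from the surviving leg
```

The point of the sketch is the ordering: the host never sees an acknowledgement for data that exists on only one array, which is why an array failure is seamless.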
One note: use the same storage class of arrays when doing this, for performance reasons. For example, you don't want to mirror a VNX and a VMAX in the back end if you can avoid it, mainly because your writes will only be as fast as the slowest array. This should be negligible in most environments and is 100% supported either way, but I figured I would mention it here.
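The "as fast as the slowest array" point is simple arithmetic: because the host acknowledgement waits on both legs, the effective write latency is the maximum of the two legs. A tiny example with made-up latency figures:

```python
# Hypothetical per-write latencies (milliseconds) -- numbers are
# illustrative only, not measured VNX/VMAX figures.
leg_latency_ms = {"vmax": 0.5, "vnx": 2.0}

# With synchronous mirroring, the ack waits on both legs,
# so the host-visible write latency is the slower of the two.
host_write_latency_ms = max(leg_latency_ms.values())
print(host_write_latency_ms)  # 2.0 -- gated by the slower array
```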
Many customers use this exact configuration for the reason above: to protect against a storage-array SPoF within a datacenter. The beauty of this solution is that it removes any mirroring overhead at the host level and, with data mobility within the VPLEX, makes management and future tech refreshes completely transparent to the end user.
Thank you so much for that detailed reply Keith. Really appreciate the time taken to clarify my doubt. It really helped.
Since the customer is currently on an internal iSCSI network, I will have to get him to upgrade to FC internally, as I am told that is a prerequisite for VPLEX?
The VPLEX is block only.
The connections from the server to the SAN must be FC
The connections from the VPLEX to the SAN must be FC
To determine the number of engines you require, you will need to know the IO profile of the environment; then you have the choice of a VS2 or VS6, depending on the IOPS required.
How are the servers currently connected to the HP 3PAR?
You will need the 3PAR connected to the same FC switch as the VPLEX and the servers.
Hope that makes sense
Yup...VPLEX is all FC like tadhg mentions above.
All hosts, storage, and infrastructure will need to run FC to use the VPLEX. Once that's in place, you have a couple of ways to migrate onto the VPLEX and get everything behind it while they get the second array straightened out.