1. We are deploying a VPLEX dual-engine system, and the customer has requested that we split the dual engine across both racks. Both racks are within the same data center. Both engines will still be connected via FC for the front-end and back-end connectivity, following best practice between racks. Will this pose a concern?
2. The VPLEX documents on the EMC support website don't specify the power requirements of the individual components of the VPLEX system: the kVA and amperage of the management server, Engine 1, Engine 2, the SPS, the FC COM switch, and so on.
Please advise on the above. Thanks.
I can't see that splitting the engines between separate racks will pose any problems, but you might need to check with your EMC customer representative.
For the power requirements, have you had a look through the VPLEX Site Preparation Guide to see if it answers your questions?
Thanks for the info.
We did check with our EMC rep, and after brainstorming we don't think it will pose any problem either. However, we don't want to miss any other implementation best practices that we might not know about. We might be the first to do this; if not, we hope those who have done it before will give us some advice.
As for the tech specs, yes, we have gone through that doc and the other VPLEX docs too. There's no breakdown of individual component specifications.
Splitting the hardware is not supported. If this is a hard requirement for the customer, I would recommend opening an RPQ to make sure that you have the correct procedure. I am not 100% confident it will be approved.
Thanks for your advice.
But what is the reason it is not supported? The back end is all FC connectivity; nothing is on the network except the management server. We couldn't justify this to the customer.
An RPQ for EMC's own VPLEX? EMC might not even entertain my request :).
We don't want to split the system either, but there is no good reason we can point to, and we hope to get more advice from anyone in the field who has encountered problems doing this. Then we can advise our customer not to do it.
Have you received any more information regarding this request? I would think it might be valuable information for any new community peers who would like to do the same configuration you were planning.
We don't have any supporting statement, justified deployment, or field experience from others on splitting the engines.
As this is not a simple setup, we are not comfortable splitting the engines either. We therefore managed to convince the customer to stick with best practice and keep both engines in the same rack.
The VPLEX cluster is designed with certain failure modes in mind, given the assumption that the engines are collocated in a single rack. Splitting multiple engines across racks changes those failure modes significantly, and hence it is not supported.
So that I can better understand: why is the customer requesting that the engines be split across racks?
This is a concern for VPLEX implementations. In a typical data centre with SAN connectivity, the best practice is to have two redundant fabrics. SAN cabling is then routed diversely for each fabric, from host to switch to array, so that accidental damage cannot cause an outage. With VPLEX in a single rack, all the fibre cabling is concentrated at this one point, and there is the potential to impact both fabric paths at the same time.
We raised an RPQ and it was rejected. As well as some internal SAN switching for dual engines, there is also serial cabling between the internal UPS and the BBUs in the cabinet.
RPQ rejection is expected.
The serial cables run to the individual engines, don't they? They should not impact the splitting of the engines.