Direct connection between the host and the array limits the number of hosts you can connect to the array, and it prevents multipathing solutions from working properly (raising data-unavailability concerns if a particular path fails). Especially in the case of an XtremIO array, you might not be able to scale to the maximum performance potential because you might not have enough hosts driving enough I/O to the array.
EMC has not tested the direct connect scenario. If the customer absolutely wants it, the recommendation is to have the EMC account team submit an RPQ asking for it to be supported and tested.
Thanks Avi. I think the customer was interested in this for a test environment for customer demo purposes, but I will clarify. Many thanks for the feedback.
Building on what Avi said, when we’ve used direct connect in the lab we ran into an issue where the FC link doesn’t get reestablished after a reboot. We had to wait until the controllers were back up and then manually unplug and replug the fiber into the HBA ports. Not something that would work in production. My guess is that the FC switches retry the links more frequently and for longer than the HBA ports do, so the switches reestablish the link after a reboot while the direct connection doesn’t. There’s probably something we could do about that if it were a customer request, but for now it’s not something I’d recommend for customers.
Miroslav Klivansky, Consultant Technologist
XtremIO Business Unit – EMC Corporation
(408) 566-2109 office; (408) 596-6009 mobile
Follow us on Twitter: @xtremio
When you say "the FC link doesn’t get reestablished after a reboot," are you referring to a host reboot or an XIO node reboot?
Also, what OS did you use for your testing?
We had an RPQ for Direct Connect using W2k8 R2. We tested with PowerPath without any issues, including cable pulls and array node reboots, and we also configured it for SAN booting.
It's been a while, so my memory is a bit fuzzy. From what I recall, the reboot in question was of the storage array controllers. It's a lab environment, and we were running frequent updates between alpha and beta builds that would do NDU reboots of the storage controllers as part of the update. The hosts were running ESX and using native multipathing (no PowerPath). If I remember correctly, we could get the link back by rebooting the hosts, but an easier way was just to walk down to the lab and unplug and reconnect the fiber. At the time, my thinking was that the HBAs give up retrying the links too quickly. They should retry frequently at the beginning, but then fall back to something like retrying once every 5 seconds indefinitely. A different brand of HBA, a different driver, or even a different OS might produce different results. A rough sketch of the retry policy I have in mind is below.
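To make that concrete, here is a minimal Python sketch of the two policies, purely as an illustration of the hypothesis (it simulates link negotiation with a timer; the function names, parameters, and timings are all made up, not actual HBA or switch firmware behavior):

```python
import time

def make_simulated_link(comes_back_at: float):
    """Simulate an array port that only answers after the controllers
    finish rebooting (a hypothetical stand-in for real FC negotiation)."""
    start = time.monotonic()
    def try_link_up() -> bool:
        return time.monotonic() - start >= comes_back_at
    return try_link_up

def reestablish_link(try_link_up, fast_retries=10, fast_interval=0.5,
                     slow_interval=5.0, give_up_after_burst=False):
    """Hypothetical retry policy. give_up_after_burst=True models an HBA
    that stops retrying after the initial fast burst (the behavior we
    suspected); False models a port that keeps retrying slowly forever,
    which is what the FC switches appear to do."""
    for _ in range(fast_retries):      # fast burst right after link loss
        if try_link_up():
            return True
        time.sleep(fast_interval)
    if give_up_after_burst:
        return False                   # link stays down until a human replugs it
    while True:                        # indefinite slow retry, every 5 s
        if try_link_up():
            return True
        time.sleep(slow_interval)

# Controllers take ~20 s to come back; the "give up" policy misses them,
# which matches what we saw with direct connect in the lab.
link = make_simulated_link(comes_back_at=20.0)
print(reestablish_link(link, give_up_after_burst=True))  # prints False
```

With `give_up_after_burst=False` the same call would eventually return True once the simulated controllers come back, which is the behavior we'd want from the HBA in a direct connect setup.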
Take care and hope that helps,