June 14th, 2012 12:00

Equallogic and XenServer 6

Hi,

I am looking for some EqualLogic and XenServer 6 recommended practices. From what I can tell from the Citrix and EqualLogic support documents, it seems like:

* MPIO with LVM over iSCSI is only supported for redundancy, not for load balancing. Redundancy is provided by bonding two NICs for iSCSI. IMHO that is not multipathing; there is only one path, but at least it is redundant.

* StorageLink should be supported with EqualLogic, but I cannot determine whether StorageLink is able to provide true MPIO towards EqualLogic.

Actually I would prefer LVM over iSCSI, but I gave StorageLink a try because it might handle MPIO.
My attempt to get StorageLink up and running was unsuccessful; the response I got said something like "XenServer is not able to query the array". I tried with and without a dedicated management network on the EqualLogic group; in older XenServer versions you had to disable dedicated management on the group for StorageLink to work. There are reports that StorageLink with EqualLogic is broken until SP1 for XenServer.

It seems like XenServer favours iSCSI solutions with multiple subnets, as most Linux-based systems do. But you can successfully run true MPIO towards EqualLogic with RHEL. So does anyone know if it is possible for XenServer to go beyond utilizing one NIC with EqualLogic?

Would it be supported to make the following changes on XenServer 6, and would they make XenServer pick up two NICs for iSCSI? I know this question should be directed to Citrix, but maybe someone here knows.

net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
Since this will be a production system, I would rather go safe than fast. In this case I do not think 1 Gbps will be that much of a bottleneck, but hey, it's 2012 ;)
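For reference, a minimal sketch of how those sysctls could be applied and persisted on the host (assuming standard sysctl tooling is available in Dom0; whether Citrix would support this is exactly the open question):

```shell
# Apply at runtime: reply to ARP only on the interface that owns the
# address, and announce the best local source address.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# Persist across reboots
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p
```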
cheers


June 20th, 2012 09:00

The answer ultimately will have to come from Citrix. What are they going to support if you modify settings on the host? Is it possible? Yes. If/when you have a problem, what is their support going to say? That I don't know.

What I have seen so far is that VMs that really need more I/O use the iSCSI initiator inside the guest to connect to their data volumes. It's been called "Storage Direct" in some circles. So if you are running Windows VMs or Linux VMs, you can leverage the appropriate HIT kit. The boot disk would still be using only one 1GbE port, but the I/O requirements of a boot drive are very small.
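As a rough sketch, connecting a Linux guest directly to a data volume with open-iscsi might look like the following (the group IP 10.0.0.10 and the target IQN are placeholders, not values from this thread):

```shell
# Inside the guest VM, using open-iscsi (iscsiadm)
# Discover targets on the EqualLogic group address (placeholder IP)
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260

# Log in to the data volume (placeholder IQN)
iscsiadm -m node \
  -T iqn.2001-05.com.equallogic:0-8a0906-example-datavol \
  -p 10.0.0.10:3260 --login
```

With the HIT kit installed in the guest, MPIO across the guest's virtual NICs is handled by the kit rather than by the XenServer host.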

Just making those changes will NOT enable MPIO. The Xen management layer doesn't support configuring the interface files that open-iscsi uses to enable MPIO on the same subnet. With different subnets you don't need to modify those files; you have one default file for each subnet. When both are logged in, the Linux multipathd sees the same disk serial number and creates an MPIO 'disk' from that set of disks. Then Xen can partition it up as needed.

I have no idea when or if Citrix will support modifying those interface files.  

Regards,

November 26th, 2013 06:00

The Dell Storage team is proud to publish the following deployment and configuration guide:

Using Dell EqualLogic and Multipath I/O with Citrix XenServer 6.2 

Retweet!  https://twitter.com/PachecoAtDell/status/405038752730345472

This document describes how to configure EqualLogic storage, including MPIO, in Citrix XenServer version 6.2 environments using the software iSCSI adapter. XenServer 6.2 natively supports MPIO with a single discovery address. Dell and Citrix have collaborated to provide EqualLogic MPIO capabilities from XenServer.



Authored by:  Donald Williams
