Unsolved


7 Posts


March 23rd, 2015 10:00

How do you set up MPIO in a VM?

Hopefully this is an EqualLogic query; it assumes Dell MEM is set up correctly, which I have done.

My question comes from the fact that MPIO in a VM needs 2 dedicated iSCSI NICs on the SAN subnet, not 1.

I have 2 x 10Gb SFP cards in each of my ESXi hosts (4 SFP ports in total).

Now, this isn't documented anywhere.

But I use Veeam and I'm setting up a Veeam backup proxy: 1 x NIC to the main production LAN on a separate switch, and 2 x NICs for the dedicated iSCSI SAN subnet on another switch.

However, can someone tell me how to correctly set up vSwitch2 for the SAN subnet?

Do I just create a new Virtual Machine port group and make one adapter active and the second adapter unused, then the reverse for the 2nd Virtual Machine port group?

Then on my Veeam backup proxy VM, I add the 2 x Virtual Machine port groups and, in the iSCSI initiator, add the 2 IP addresses.

Does this make sense? It would mean I'm actually utilising 2 x vNICs for up to 10Gb throughput.

My EqualLogic volumes already have the IP addresses assigned that I will be using on the vNICs.

4 Operator


2.4K Posts

March 23rd, 2015 11:00

As a Dell EQL/VMware/Veeam/vRanger administrator, I can say that you should consider HotAdd mode:

- A Veeam proxy will most likely use the HotAdd vDisk mode, and if not it falls back to NBD or Direct SAN (I'm not sure of the order). Other products "support" backup from SAN only on physical systems.

- Veeam backup from SAN hardware snapshots requires a Veeam Enterprise Plus license and a supported SAN like NetApp/LeftHand (Dell EQL isn't on the list). But plain backup from SAN is a valid option and will work with EQL. Be sure that you set automount disable, and keep your fingers crossed that Windows never initializes or writes to that mounted VMFS volume.

Setting up iSCSI in a VM guest is no different from a physical system. Create two "VM Network" port groups labelled "Guest iSCSI-1" and "Guest iSCSI-2".

- Dell support will tell you to place these 2 port groups on a vSwitch that uses 2 of your LAN pNICs, but I suggest you use the same pNICs that the ESXi host uses for its iSCSI traffic. Check that jumbo frames are enabled on the vSwitch. Add only one pNIC to each port group as active and the other as standby; this is a different setup than iSCSI on an ESXi host (rough esxcli sketch after this list).

- Add 2 more vNICs to the VM and give them IPs from the iSCSI network. If you see a performance difference when using jumbo frames, set MTU 9000 in the drivers (vmxnet3 is needed for that; guest-side sketch after this list).

- Set automount disable and automount scrub within diskpart (sketch after this list).

- Install the EqualLogic Host Integration Tools for Microsoft (HIT Kit) 4.7.1 and configure MPIO.

- Modify the EQL ACL of the volumes
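
Rough esxcli version of the vSwitch/port group part, assuming ESXi 5.x, vSwitch1 as the iSCSI vSwitch and vmnic2/vmnic3 as the iSCSI pNICs (adjust the names to your hosts):

# enable jumbo frames on the vSwitch
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
# create the two guest iSCSI port groups
esxcli network vswitch standard portgroup add --portgroup-name="Guest iSCSI-1" --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="Guest iSCSI-2" --vswitch-name=vSwitch1
# override the failover order per port group: one pNIC active, the other standby, mirrored
esxcli network vswitch standard portgroup policy failover set --portgroup-name="Guest iSCSI-1" --active-uplinks=vmnic2 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name="Guest iSCSI-2" --active-uplinks=vmnic3 --standby-uplinks=vmnic2

The mirrored active/standby order is what keeps two paths visible to the VM while the ESXi host handles the uplink failover.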
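
For the MTU inside the guest, on Windows Server 2012 or newer something like the following should work with vmxnet3; on older Windows just set it in the NIC driver properties. The adapter names here are placeholders and the exact property name/value depends on the driver version, so check with Get-NetAdapterAdvancedProperty first:

# confirm what the jumbo frame property is called for your driver
Get-NetAdapterAdvancedProperty -Name "Guest iSCSI-1","Guest iSCSI-2"
# set jumbo frames on both guest iSCSI vNICs (value string varies by driver)
Set-NetAdapterAdvancedProperty -Name "Guest iSCSI-1" -DisplayName "Jumbo Packet" -DisplayValue "Jumbo 9000"
Set-NetAdapterAdvancedProperty -Name "Guest iSCSI-2" -DisplayName "Jumbo Packet" -DisplayValue "Jumbo 9000"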
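
And the diskpart part, from an elevated prompt (this only changes mount behaviour, it doesn't touch data):

diskpart
DISKPART> automount disable
DISKPART> automount scrub
DISKPART> exit

automount disable stops Windows from automatically mounting newly seen volumes (your VMFS LUNs), and automount scrub removes the leftover entries for volumes that were mounted previously.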

Again... please test both methods, HotAdd vs. backup from SAN, and if the second one isn't dramatically faster, stay with HotAdd, because you don't have to configure any networking and IP stuff. There is also then no access to your VMFS volumes from a Windows system, so your VMware datastores stay safe.

Regards,

Joerg

4 Operator


2.4K Posts

March 23rd, 2015 13:00

If you already have a physical Veeam server, consider adding a 10GbE NIC there and connecting it to the iSCSI SAN network. That setup would be an exact match for any best practice and installation guide ;)

To get the best out of EQL MPIO you need 2 extra vNICs inside the VM, each one connected to a different VM port group. If you use only one vSS you get a little extra redundancy:

vSwitch1
- PG "Guest iSCSI-1"
  |-- vnic2 (active)
  |-- vnic3 (standby)
- PG "Guest iSCSI-2"
  |-- vnic3 (active)
  |-- vnic2 (standby)

You can override the NIC failover policy at the port group level. In the case of a NIC/cable/pSwitch problem, all the redundancy is handled by the ESXi host; from the VM and EQL MPIO perspective there are always at least 2 paths available.

If you have separated your pNICs onto different vSwitches, you end up with something like this:

vSwitch1
- PG "Guest iSCSI-1"
  |-- vnic2 (active)

vSwitch2
- PG "Guest iSCSI-2"
  |-- vnic3 (active)

and the failover is then performed by the EQL/Windows MPIO.

Some notes: with Veeam, most of the time you have jobs with a single full backup at the beginning and then something like reverse incremental, which is effectively incremental forever with a synthetic rebuild for the full backup.

So most of the time you are performing incremental backups. In my setups the source isn't the bottleneck; the backup repository is. I can't see you filling 2 x 10GbE pipes, or the proxy having the CPU power to do so.

I have a lot of 10GbE setups with vRanger + DR4100, and shared SAS/1GbE iSCSI SANs + Veeam, and all of them use HotAdd. Filling a 1GbE pipe is easy with both products when using backup from SAN or HotAdd (NBD is different). So I can't help you there because I don't have numbers.

I have seen up to 700MB/s on shared SAS setups with Veeam, but on closer inspection the VM was thin provisioned or had unused space within the filesystem, and reading and compressing zeros is an easy task for any program ;)

Regards,

Joerg

7 Posts

March 23rd, 2015 13:00

Hi Joerg, thanks for your answer. I am in exactly the same boat as you, being an EQL/Veeam/VMware administrator.

As my Veeam management server is physical, I ruled out HotAdd. Direct SAN definitely seems faster than NBD and HotAdd modes, but with the risks associated with accidentally initialising a volume. In Veeam, parallel processing made a massive improvement.

I have a separate vSwitch for network/host management traffic with 2 x dedicated 10Gb SFP, and then use another 2 x 10Gb SFP NICs for SAN access, on its own dedicated switch.

Anyway, you have answered my question. I have set up 2 x SAN iSCSI network port groups, each one bound to a physical adapter on the switch (previously there was only one port group with both adapters in it).

With a VM in direct iSCSI mode, is it best to have one adapter active and one in standby, rather than one active and one unused?

As I'm 10Gb SFP all round, my backups normally hover around 100MB/s; I never seem to transfer data in Veeam to my repository at over 1Gb/s, let alone 10Gb/s. What are you getting?

7 Posts

March 23rd, 2015 14:00

My physical Veeam server has 4 SFP ports (2 for network and 2 NICs for iSCSI), although they are teamed for now; I need to break the team to get MPIO working correctly. With 6 cores it comes in handy when backing up to my Dell DR4000 dedupe device, which is also on a LAG connection (2 x 10Gb NICs). Money has been spent wisely; unfortunately the setup hasn't been.

The reason for my VM proxies was basically to reduce the risk of anyone mounting disks, as no one would log onto these.

So my setup will be like this:

vSwitch0 (vMotion, management)

vSwitch1 (SAN switch)
- PG "Guest iSCSI-1"
  |-- vnic2 (active)
  |-- vnic3 (standby)
- PG "Guest iSCSI-2"
  |-- vnic3 (active)
  |-- vnic2 (standby)

Then I'll make sure MTU 9000 is set up everywhere, as well as the iSCSI initiators for both NICs, like you mentioned (quick jumbo frame check below).
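
One way to verify the jumbo frames end to end (the SAN IP below is a placeholder): 8972 bytes of payload plus the IP/ICMP headers adds up to 9000, and the don't-fragment flag makes the ping fail if anything in the path is still at 1500.

From the ESXi shell:
vmkping -d -s 8972 10.0.0.10

From inside the Windows guest / the physical server:
ping -f -l 8972 10.0.0.10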

Hopefully I'll see some improvement tomorrow and I'll post the results.

Thanks for your help,

7 Posts

March 27th, 2015 08:00

OK, I've created the Virtual Machine port group labels for iSCSI.

I went into the iSCSI initiator and clicked Add Session (selected iSCSI guest0, group IP), then clicked Add Session again and selected iSCSI guest1, group IP.

Now in Disk Management I see the iSCSI-mounted disk twice, once per path. Surely that's wrong?

7 Posts

March 28th, 2015 17:00

Hi, thanks for the update. As soon as I posted this I found a Google article that said to click MPIO, enable the option, and reboot.
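
(For reference, the command-line equivalent of that MPIO checkbox seems to be roughly the following; the PowerShell cmdlet needs Windows Server 2012 or newer, mpclaim exists on older versions too, and both want a reboot:)

# Server 2012+: let MPIO automatically claim iSCSI-attached disks
Enable-MSDSMAutomaticClaim -BusType iSCSI
# older way: claim all attached devices for MPIO; -r reboots straight away
mpclaim -r -i -a ""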

However, I have since installed the EqualLogic HIT Kit 4.7.1. Does anyone know if I still need to connect to each volume using Add Session, adding each NIC one at a time (rough sketch of the manual way below)?
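
If it does still have to be done by hand, per volume it would look something like this on Server 2012+ (the volume IQN, group IP and initiator IPs are placeholders for my values):

# register the EQL group portal once
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10
# one persistent, MPIO-enabled session per guest iSCSI NIC
Connect-IscsiTarget -NodeAddress "iqn.2001-05.com.equallogic:example-volume" -TargetPortalAddress 10.0.0.10 -InitiatorPortalAddress 10.0.0.21 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.2001-05.com.equallogic:example-volume" -TargetPortalAddress 10.0.0.10 -InitiatorPortalAddress 10.0.0.22 -IsMultipathEnabled $true -IsPersistent $true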

7 Posts

March 30th, 2015 10:00

Still getting issues. I have installed the HIT Kit 4.7.1 on my 64-bit physical server.

I excluded the IP address range for the management network LAN in the MPIO tab (the SAN LAN range is left in).

Now in Control Panel > iSCSI Initiator, if I look at the LUNs they still say inactive. I don't really want to have to click each LUN and assign the relevant NIC1 and then NIC2 for each of them.

Am I doing something wrong? I was expecting all the paths to connect automatically (how I'm checking the paths is below).
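
In case it helps, this is how I've been checking the session and path status from the command line (mpclaim is there once the MPIO feature is on, iscsicli comes with the iSCSI initiator service):

# list MPIO disks and how many paths each one has
mpclaim -s -d
# list the current iSCSI sessions
iscsicli SessionList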

While a Veeam backup is running it says it's using Direct SAN on the Veeam job; however, in Task Manager all the traffic is still going through the main network LAN (10% utilisation) and not the dedicated SAN LAN (showing 0% utilisation on NIC1 and 5% on NIC2).
