
May 19th, 2011 09:00

Best Practice for VNXe, vSphere 4.1 and iSCSI

So far I have been unsuccessful in finding a document that outlines the preferred method for configuring vSphere 4.1 with a VNXe 3300 using iSCSI. I have been over Chad's blog articles and the VMware documentation for setting up iSCSI along with NMP and Round Robin. This is what we have currently configured, and it is working. Essentially, I have two ESXi hosts with iSCSI configured on two separate subnets. Each host has two NICs dedicated to iSCSI (out of a total of eight 1 Gb NICs in each host), one running to each of two switches. Those switches connect to the VNXe with one port from each SP in each switch (each SP has four 1 Gb NIC ports). The iSCSI target on the VNXe has an IP address in each subnet bound to it. Like I said, the setup is pretty straightforward.
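
Roughly, the layout described above looks like this (the subnet numbers are illustrative placeholders, not taken from the actual configuration):

    Subnet A (e.g. 10.0.4.x) -> Switch 1 -> one 1 Gb port from SPA + one from SPB
    Subnet B (e.g. 10.0.5.x) -> Switch 2 -> one 1 Gb port from SPA + one from SPB
    Each ESXi host: one iSCSI NIC into Switch 1, one into Switch 2
    VNXe iSCSI server: one IP address on Subnet A, one on Subnet B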

So my dilemma is this: is this the best way to set up this environment? I get the feeling it is not, though it works and provides the fault tolerance we want. It seems to limit us, though, in terms of iSCSI bandwidth, as we are not trunking the connections from the VNXe to the switches to get the aggregated bandwidth. Can someone point me to a best practice for this type of setup and/or outline how this should be set up to best utilize the connections we have?

727 Posts

May 19th, 2011 12:00

Link aggregation is most useful in applications where there are many clients connecting to a server. LACP provides both path failover and load balancing, but it balances on a per-connection basis. You can aggregate four ports together, but you will only use one port at a time if you have just one host connected to one iSCSI server.

You can use more of the bandwidth if you use multipathing instead. With multipathing you can set up a second IP interface (best practice is to have these on separate subnets) on an iSCSI server and use SCSI-level load balancing. On the ESX side, you need to create the necessary network setup to connect to the VNXe: two vmkernel interfaces, each bound to its own physical uplink (vmnic). Best practice is to put one vmkernel interface on the same subnet as one of the iSCSI server interfaces and the other vmkernel interface on the same subnet as the other VNXe iSCSI server interface.
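
As a minimal sketch, assuming the iSCSI traffic runs on vSwitch1 and the two subnets are 10.0.4.x and 10.0.5.x (the port group names, IPs and netmasks below are examples, not taken from this environment), the ESX 4.1 side looks something like:

    # Add one port group per iSCSI subnet to the iSCSI vSwitch
    esxcfg-vswitch -A iSCSI-A vSwitch1
    esxcfg-vswitch -A iSCSI-B vSwitch1

    # Create a vmkernel interface in each port group
    esxcfg-vmknic -a -i 10.0.4.11 -n 255.255.255.0 iSCSI-A
    esxcfg-vmknic -a -i 10.0.5.11 -n 255.255.255.0 iSCSI-B

Each port group should then be overridden in the vSphere Client so that it has exactly one active uplink (one physical NIC for iSCSI-A, the other for iSCSI-B); that one-to-one mapping is what makes the port binding in the next step work.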

Here's a link to Chad Sakac's blog on some of the details around this. There are at least a couple of commands that have to be done from the command line.

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

Pay very close attention to the details related to associating vmkernel interfaces with the iSCSI initiator. Skipping that part can cause the iSCSI initiator discovery to fail.
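
For reference, the binding step on ESX/ESXi 4.x looks roughly like this; the vmkernel interface names (vmk1, vmk2) and the software iSCSI adapter name (vmhba33) are placeholders that will differ on your hosts:

    # Bind each iSCSI vmkernel interface to the software iSCSI adapter
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33

    # Verify the bindings before running discovery
    esxcli swiscsi nic list -d vmhba33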

The load balancing algorithms in the native NMP plugin for ESX are fairly basic, but they provide some benefit. The algorithms in PowerPath for ESX work substantially better.
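
For example, switching a single LUN to Round Robin with the native NMP can be done from the 4.1 command line roughly as follows (the naa identifier is a placeholder for your actual device ID):

    # Set the path selection policy for one device to Round Robin
    esxcli nmp device setpolicy --device naa.60060160... --psp VMW_PSP_RR

    # Check the resulting policy and working paths
    esxcli nmp device list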

The other option is to create generic iSCSI LUNs that are presented directly to the guest VMs. In the Windows case, the native multipathing drivers do a somewhat better job with load balancing than the native ESX NMP, though I won't claim to have done the definitive investigation on that. There are trade-offs when you present storage directly to VMs; it adds some complexity to the configuration.

Even if you do use multipathing, you will still be using at most two ports for a given LUN. If you can spread the application across multiple LUNs, then you have the opportunity to create additional iSCSI server instances on both SPs that use different port pairs.

13 Posts

May 19th, 2011 13:00

Thanks very much for the descriptive reply. What you have described here is exactly how my setup is currently configured. What I am trying to ascertain is whether or not this is the best method by which to connect to the VNXe. You mention PowerPath; I am not up to speed on this product yet, so I will look into it. It just seemed to me that there should be a better way to utilize all of the available ports on the VNXe and my ESXi hosts to gain greater bandwidth.

4 Posts

June 6th, 2011 18:00

You mentioned in your post that "With multipathing you can set up a second IP interface (best practice is to have these on separate subnets) on an iSCSI server...".

Why do we need to use two NICs from the same SP on separate subnets? What are the benefits and, of course, the drawbacks? Why is that a best practice?

By configuring them on separate subnets, I am sure you restrict the VNXe's ability to aggregate the links on the SP ports.

3 Posts

June 17th, 2011 07:00

If you put an iSCSI server on each SP with two NICs and connect them to different switches (no LACP), with each switch carrying only one of the subnets, it is easy to differentiate them. I tend to lock them away in a VLAN as well to isolate the iSCSI traffic. Then you can use Round Robin and gain access to both NIC ports. By default the I/O rotates to the next path after 1,000 I/Os have been issued; you can alter that behaviour, for example setting it to rotate after every single I/O.
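
On ESX/ESXi 4.1 that tweak looks roughly like this (the naa device ID is a placeholder, and whether one I/O per path actually helps depends on the workload, so measure before and after):

    # Rotate to the next path after every I/O instead of every 1,000 I/Os
    esxcli nmp roundrobin setconfig --device naa.60060160... --type iops --iops 1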

12 Posts

June 20th, 2011 07:00

Why use iSCSI at all? If you have a VNXe then you can use NFS, which will give you just as good performance.

20.4K Posts

June 20th, 2011 08:00

What if you need RDM?

June 20th, 2011 09:00

pwhyton,

Have you run IOMeter against your NFS data stores?  I'd love to see your results.  I would prefer to use NFS, but my setup of 10 SAS 300 GB drives in two RAID 5 (4+1) arrays was showing very poor performance numbers.

Best Regards.

4 Posts

June 20th, 2011 15:00

This may not be helpful, but not everyone knows that NFS is a file protocol and iSCSI is a block protocol, hence the performance difference.

June 20th, 2011 15:00

Phokay2010,

Not helpful. Everyone knows this.

What I am trying to find out is why identically configured VMs on the same VNXe perform poorly with NFS and up to five times better with iSCSI.

Best Regards.

4 Posts

June 20th, 2011 15:00

If you need more performance, add more drives to your RAID groups. More spindles mean more throughput (MB/s) and more IOPS.
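
As a rough rule of thumb (assuming something like 150-180 random IOPS per 10K/15K SAS spindle), ten drives give you on the order of 1,500-1,800 random read IOPS in total, and with the RAID 5 write penalty of four back-end I/Os per front-end write, only a few hundred random write IOPS. Doubling the spindle count roughly doubles both figures.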

57 Posts

September 30th, 2011 06:00

Hi All,

Based on the configurations I have suggested to customers so far, these are the common steps:

  1. Create two iSCSI servers, one on each SP.
  2. Aggregate ports eth2 and eth3 on each SP.
  3. Connect the SPA ports to switch A and the SPB ports to switch B.
  4. Create a LAG group on switch A with the SPA ports only (two ports), and do the same on switch B with the SPB ports.
  5. Make the port configuration identical on switches A and B.
  6. Enable PortFast on the switch A and B ports.
  7. Create an equal number of LUNs on each SP, as the requirement dictates.
  8. Connect the VMware hosts to switches A and B.

Now start the load balancing part on the VMware side.

  1. Before migrating the VMs to the VNXe, make a list of the VMs sorted by IOPS.
  2. Then move the even-numbered VMs to LUNs on SPA and the odd-numbered VMs to LUNs on SPB.

For testing purposes, reboot the following components and check access to the LUNs after each one (a quick way to check the paths from the ESX command line is sketched after this list):

Note: reboot one component, wait for it to come back up, and then reboot the next one.

  1. Switch A
  2. Switch B
  3. SPA
  4. SPB
  5. If all of the tests pass, your environment is set up correctly.
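
A minimal way to confirm that all paths have come back after each reboot, assuming the software iSCSI adapter is vmhba33 (adjust the adapter name for your host):

    # Rescan the software iSCSI adapter
    esxcfg-rescan vmhba33

    # List devices and check the working paths per device
    esxcli nmp device list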

If anything goes wrong, I suggest you open a case with the VNXe technical support team to help you fix the issue.

Thanks,

Rohan.

2 Posts

October 27th, 2011 10:00

I recently set up an iSCSI server on a VNXe 3100 system. The IP address I used for the iSCSI server, 10.0.4.25, belongs to a dedicated 10.0.4.x subnet.

I then created a generic iSCSI storage resource and presented it to an ESX host that I had already added to the VNXe host list. I gave it permissions for "virtual disk and snapshots".

On the ESXi (4.1) side, I set up a vmkernel port group with an IP on the same 10.0.4.x subnet. I then enabled software iSCSI and did a dynamic discovery of my VNXe iSCSI server, which it was able to see. But when I try to create a new VMware datastore with the Disk/LUN option, I do not see any targets. I rescanned the HBA a couple of times and also tried a static discovery by manually adding the target. I still cannot see the LUN to create a VMFS partition on.

Rebooted the ESX host, no luck.

Both the ESX host and the VNXe can see each other's targets, so what am I missing?

I'm not using CHAP on either end.

214 Posts

November 22nd, 2011 11:00

verman,

Have you checked that the Windows IQN matches the one you added on the VNXe for that host?

I noticed that after I added a host to a domain (having initially recorded its IQN), the IQN changed, so I had to update the name on the VNXe.

Regards

Smarti

75 Posts

November 29th, 2011 13:00

pdkilian,

From what I've seen, NFS on the VNXe is not really performing that well. Here are some IOmeter test results with NFS/VMFS and also with SAS/EFD:

Hands-on with VNXe 3300 Part 6: Performance

VNXe 3300 performance follow up (EFDs and RR settings)

@henriwithani

57 Posts

November 30th, 2011 08:00

 

Hi Verman,

If you have a vCenter server and added the VMware ESX host to the VNXe by supplying the credentials of the vCenter host, then you shouldn't have any problem with the IP address or IQN.

I think the datastore is already created and presented to the VMware ESX host.

Can you ping the iSCSI server IP address from the VMware host using vmkping, and also try to ping the ESX VMkernel IP from the VNXe using the iSCSI interface?
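
A rough sketch of those checks from the ESXi console, assuming the iSCSI server IP is 10.0.4.25 (from the earlier post) and the software iSCSI adapter is vmhba33 (a placeholder):

    # Check reachability from the iSCSI vmkernel interface
    vmkping 10.0.4.25

    # Confirm the vmkernel binding and rescan for LUNs
    esxcli swiscsi nic list -d vmhba33
    esxcfg-rescan vmhba33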

Are you getting any error while creating the datastore?

Thanks,

Rohan Raj
