
January 13th, 2011 13:00

Virtual SAN host NIC Question

BACKGROUND:

We recently got MS Hyper-V and want to create some virtual servers that will be connected to the SAN. We have a DELL/EMC CX3-10 SAN. Our physical servers each have 2 physical NICs with 2 ports apiece: one NIC for the LAN and one NIC dedicated to iSCSI SAN use only (both ports). One port is on a 0.x subnet leading to a DELL switch; the other is on a 1.x subnet leading to a 2nd DELL switch. We also use PowerPath for redundancy. The LAN NIC has both of its ports teamed to create a single virtual NIC. That is how we do physical servers.

QUESTION:

The hardware of our Hyper-V server is newer: it has a single onboard NIC with 4 ports and a PCI slot NIC with 4 ports of its own. The PCI NIC will be dedicated to SAN use only, but I am trying to figure out the optimum way to make use of all 4 ports, and whether there is even any benefit (in the form of higher I/O throughput, etc.) to using all 4 physical ports versus just 2. This is an Intel gigabit card, BTW. I came up with several scenarios and would like to know which is best and why/why not. Right now the plan is to have only one virtual server (guest) connected to the SAN, which we will call FILESERVER for purposes of this question. It is a file server that contains tons of data used by roughly 600 nodes.

1.) On the Hyper-V physical server itself, team these 4 SAN ports into 2 virtual adapters, and then present these 2 virtual adapters as SAN0 and SAN1, respectively, to the FILESERVER guest machine for connection to the 2 redundant DELL switches (0.x and 1.x iSCSI subnets).

2.) Present only 2 physical ports to the FILESERVER guest machine and set it up just like our existing physical servers - no real change in how we already hook up hosts to our SAN.

3.) Present all 4 ports to the FILESERVER guest machine (unteamed) and then use all 4 connections for SAN traffic. WOULD THIS EVEN WORK? Since we only have 2 iSCSI subnets and switches, I don't see any benefit to this, or whether it is even a possibility. But management wanted me to find out if it was possible, in case it would somehow increase I/O ability and/or redundancy. If this would even work with only 2 iSCSI subnets, how would it affect PowerPath, etc.? I am hoping option 1 or 2 will be an obvious solution once you experts read this.

OTHER:

Is this a scenario where PowerPath VE is warranted? I don't know anything about it (yet), but I expect it is really for more complex situations, and maybe even for VHDs that boot from the SAN itself (which we don't do).

Let me know if you need clarification on anything and thanks in advance!

86 Posts

January 14th, 2011 02:00

Hello

What you are asking is fairly involved, and I will attempt to answer it.

My initial observation (and I may be wrong) is that you are proposing to run all your LAN traffic through 1 multiport NIC and all your SAN traffic through a different multiport NIC. If that is the case, then you have 2 single points of failure: one for the LAN traffic and one for the SAN traffic.

It is possible to set up all 4 iSCSI ports as separate entities - after all, they have separate IP addresses - and the fact that 2 of the ports are on the same iSCSI subnet is immaterial. This can lead to greater flexibility.

The interesting thing about iSCSI from Hyper-V is that you can have 2 different models.

Model 1: iSCSI from the Hyper-V parent - and pass the devices through to the child OSes

Model 2: pass the NICs to the child OSes and iSCSI straight from the child OSes

These 2 models require only a change in how you manage your connections. However, leaving the 4 NIC ports unteamed does increase your flexibility.
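
To illustrate, the initiator configuration itself looks the same in either model; the only real difference is whether you run it in the Hyper-V parent (Model 1) or inside the FILESERVER guest (Model 2). A rough sketch using iscsicli, the command-line front end to the Microsoft iSCSI initiator (the portal address here is made up for the example):

iscsicli QAddTargetPortal 10.0.0.201 - register one of the Clariion iSCSI front-end ports as a target portal

iscsicli ListTargets - the array's target IQNs should now appear in the list

iscsicli QLoginTarget <target IQN from the list above> - log in; repeat for each portal/path you want

In Model 1 you would then hand the resulting disks to the guest from Hyper-V (for example as pass-through disks); in Model 2 the disks simply appear inside the guest OS.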

As far as teaming the SAN NICs is concerned, this is not supported by the Microsoft iSCSI initiator - and consequently not supported by EMC.

Sam Claret EMC TSE3

16 Posts

January 14th, 2011 12:00

I really appreciate your reply.

You are correct in your initial observation. Although we are currently successfully teaming Broadcom NICs on the Hyper-V parent and passing them over to guest VM OSes, I have decided to eliminate option #1, to avoid an extra layer of possible failure since this is for SAN use, and because teaming is not officially supported. The other virtual servers we are doing this with are not connected to the SAN. So having read your response, here is my elaboration and continued questioning:

I can take Option #2 because that is the closest scenario to how we do things on non-virtual servers that are connected to the SAN. All the same redundancy is in place as on the non-virtual servers, and Navisphere would see those virtual servers as physical and have no idea they were virtual. iSCSI would be configured on the FILESERVER OS and not the Hyper-V parent, and the PowerPath software, etc. would also be configured on the guest OS. Basically the way we already do things on non-VM servers.

OR

Option #3, which requires some clarification. Ordinarily, I'd think that for Option #3 to work, I'd need either 4 VLANs or 4 separate DELL switches composing 4 separate iSCSI subnets. As mentioned, right now we only have 2 iSCSI subnets and 2 DELL switches dedicated for SAN use. Are you saying that I could simply take ports 1 and 2 on the NIC and give them separate IPs on the same iSCSI subnet, and then do the same for ports 3 and 4 on the same NIC? It gets a little confusing to me how iSCSI, PowerPath, and Navisphere would handle 4 separate iSCSI IPs (on the 2 separate existing subnets). As we were shown when this SAN was deployed by DELL, we use 2 switches, 2 subnets, and 2 NIC ports to create the redundancy needed. So assuming I can use all 4 NIC ports, is that going to give me ONLY more redundancy, or will it increase I/O potential too?

Also, whether Option #2 or Option #3 is selected, I would still set up ALL iSCSI and PowerPath configs on the guest OS and not on the Hyper-V server - this is the Model 2 you describe.

I still think the "cleanest" way is to use Option #2, but if there is a benefit to be had from Option #3, then I want to know; rather, management wants to know. They want to know if throughput is increased. How does the iSCSI traffic work? Does it just use one path at any given time, with everything else there purely for redundancy, or does it load balance and allow multiple data streams at the same time over 2 or even 4 separate NIC ports and iSCSI IPs?

I might also add that there is the possibility that the Hyper-V parent might share these 4 NIC ports with other non-SAN VMs, so Option #2 would be the only way FILESERVER would get its own dedicated ports. For Option #3, they not only want to know if using 4 ports instead of 2 for iSCSI will speed things up (we do not currently even have a speed issue), but also if the 4 ports can still carry other traffic without it affecting the SAN traffic to FILESERVER. Without KNOWING for sure, I think Option #2 is the safest, but they still want to know if the 4-port option is feasible for more than just redundancy. Hoping you can break it down for me.

Thanks again!

Message was edited by: OHCA

54 Posts

January 14th, 2011 23:00

Next part of the answer.

iSCSI in and of itself does not do any load balancing or path failover. That is down to the multipath solution you implement.

The solution sold by EMC is PowerPath. Load balancing to a Clariion is based on the characteristics of the Clariion.

Specifically, a LUN is owned by either SPA or SPB, and IO only goes to that LUN through the SP that owns it at that specific time. It is bad policy to set multipathing up as round robin, as this would force the LUN to trespass back and forth between SPA and SPB and cause serious performance issues.

So say a LUN is owned by SPA: then IO has to be routed to SPA. PowerPath is aware of the active-passive nature of Clariion arrays, so the paths connected to SPA ports (SPA0, SPA1, ...) are the ones it will use for IO.

If SPA crashes, or for some reason all the paths to SPA become unavailable, PowerPath will request a trespass of the LUN to SPB; IO then fails over to the SPB paths, and those paths are load balanced across the NICs available to the SPB ports.

I hear people ask: but what about ALUA? ALUA is a mode whereby IO sent to the 'passive' SP is honored via a pass-through mechanism to the 'active' SP.

That facility is only active under very specific circumstances in a path failover situation, and if an SP has genuinely crashed then IO pass-through will not take place.

So by correct planning and path setup you take advantage of both failover and load balancing.
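
For illustration only, a couple of PowerPath commands run on the host make the behaviour described above visible (output details vary by PowerPath version):

powermt display dev=all - shows each LUN's policy (CLAROpt is the Clariion load-balancing policy), its default and current owning SP, and the state of every path

powermt display paths - summarizes path counts per storage-system port, useful for spotting a dead switch or SP port

powermt restore - tests failed paths and brings them back into use once the fault is fixed

If the "current" SP differs from the "default" SP in the output, the LUN has trespassed.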

As for sharing NIC ports between SAN and LAN traffic, even across different VMs: basically, don't do it, as it introduces contention on the NICs. It is not recommended and is contrary to EMC best practice. Also, if you have performance issues, sharing LAN and SAN traffic on the same ports makes it a nightmare to identify where the issue is.

Thank you

Sam Claret EMC TSE3

54 Posts

January 14th, 2011 23:00

OK I think I see what you are asking.

Let me take a step back and answer the question about the ports on the SAN network first and then come back to the specific questions.

Forgive me if some of this is basic and it may come across oddly but I want to build it up in layers.

Each NIC port has its own IP address. For example, a nice simple scenario:

IP: 192.168.1.1 Mask 255.255.255.0

IP: 192.168.1.2 Mask 255.255.255.0

Both of these IPs coexist on the same VLAN.

Next, the IP ports for the Clariion - say they have:

IP 192.168.1.201 Mask 255.255.255.0

IP 192.168.1.202 Mask 255.255.255.0

Again, these coexist in the same VLAN.

When you configure the Microsoft iSCSI initiator, you should (and this is best practice) explicitly configure a path for each connection you want to make.

So you add a path for the first NIC, 192.168.1.1, and you specify that this path will use that NIC. Then you add to it the endpoint target portal, which for argument's sake is 192.168.1.201.

You similarly do the same for 192.168.1.2 connecting to 192.168.1.202

Now for the other VLAN - remember, the host knows nothing about the VLAN, only what it can see (ping). So, arbitrarily, we have:

192.168.2.1 -> 192.168.2.201

192.168.2.2 -> 192.168.2.202

and you do exactly the same.
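
A rough command-line sketch of the same thing, using the example addresses above (note that the iscsicli "quick" commands do not let you pin the source NIC, so the per-NIC source IP for each connection is normally chosen in the initiator's Advanced settings when you log on):

iscsicli QAddTargetPortal 192.168.1.201 - discover through the first Clariion port on the first subnet

iscsicli QAddTargetPortal 192.168.1.202 - and through the second

iscsicli QAddTargetPortal 192.168.2.201 - repeat for the second subnet

iscsicli QAddTargetPortal 192.168.2.202

iscsicli ListTargets - confirm the array target(s) are visible

Then log on once per path, choosing the matching local NIC/IP in the Advanced settings each time, so that 192.168.1.1 pairs with 192.168.1.201, 192.168.1.2 with 192.168.1.202, and so on for the other VLAN.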

This may seem like teaching your grandmother to suck eggs, but given that a VLAN can span multiple switches, you can build multiple redundancies into this - for example, the different Clariion ports may actually be connected to different switches, and the same for the NICs, so you can also set up crossover paths:

192.168.1.1 -> 192.168.1.202

192.168.1.2 -> 192.168.1.201

This guards against a specific switch going down, by giving PowerPath alternate paths to send the IO down.

If, on the other hand, you have everything on the same switch, then you are guarding against different endpoints going down.

Common connections without the crossover would be

HBA port 1 -> Clariion port SPA0

HBA port 2 -> Clariion port SPB1

However, if SPB panicked for some reason, then the path through HBA port 2 would be eliminated. To improve the redundancy here you could also build paths in the iSCSI initiator as follows:

HBA port 1 -> SPB0

HBA port 2 -> SPA1

As for the level at which you do this, and with how many NIC ports, that choice is entirely up to you.

I hope that makes it clearer.

Sam Claret EMC TSE3

4.5K Posts

January 17th, 2011 21:00

A couple additional points:

1. If you are using PowerPath Basic (unlicensed), then you can only use one NIC to two SP ports - SPA and SPB.

2. You cannot use the 192.168.x.x subnet for iSCSI.

3. You should have different subnets for the host and the array iSCSI ports - for example the mapping below (a quick addressing sketch follows the list):

NIC1 <--> SPA0 - subnet A

NIC1 <--> SPB1 - subnet A

NIC2 <--> SPA1 - subnet B

NIC2 <--> SPB0 - subnet B
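
As a concrete illustration of that layout on the host side - the interface names and all addresses here are placeholders only, not a recommendation for specific ranges:

netsh interface ip set address name="SAN-A" static 10.10.10.11 255.255.255.0 - NIC1 on subnet A (typically no default gateway on dedicated iSCSI NICs)

netsh interface ip set address name="SAN-B" static 10.10.20.11 255.255.255.0 - NIC2 on subnet B

ping 10.10.10.201 - SPA0 on subnet A

ping 10.10.10.202 - SPB1 on subnet A

ping 10.10.20.201 - SPA1 on subnet B

ping 10.10.20.202 - SPB0 on subnet B

Each NIC should be able to reach both SP ports on its own subnet and nothing on the other subnet.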

Please see Primus emc245445 for more information about using iSCSI and some best practice recommendations.

glen

86 Posts

January 19th, 2011 07:00

My apologies - I was only using those as example addresses. Also, if the different fabrics are on different VLANs, then using different subnets is really a matter of manageability.

As far as PowerPath Basic is concerned, the iSCSI initiator is the object PowerPath sees as the HBA, and there is only one Microsoft iSCSI initiator. The requirement for a single NIC is relaxed in this case, as PowerPath doesn't see the configuration at that level with iSCSI unless you are using TOE cards.

Sam

16 Posts

January 19th, 2011 09:00

Well, thanks again for the replies, gentlemen. I want to elaborate more, but I need to post some images to do so, and I get an error when trying to upload an image even though it is below the data size and pixel limits. I tried .jpg and .png to no avail. Any tips?

Until then, I still need to fill you in a bit. Let me first correct myself: IF we did use all 4 physical NIC ports on the PCI NIC, then we might also attempt to share them with other virtual machines that WERE on the SAN. That is, if they were shared at all with virtual machines other than FILESERVER, then those other VMs would be connected to the SAN, because all 4 NIC ports would be plugged into the iSCSI network and not the LAN.

So, having revised my questions a tad, here are the 2 questions that matter the most. For purposes of question #1 below, let's just forget we are even talking about virtual machines right now and pretend we are talking about a physical DELL server called FILESERVER. I might also add that we use a licensed version of PowerPath, but with only one license key, so it is my understanding that even though we see 4 paths, only 2 of them are utilized at any given time and the other 2 are for redundancy only, unless we add a paid plug-in for load balancing on PowerPath?

1.) Is it even possible to use 4 physical iSCSI NIC ports instead of just the 2 we currently use on all our production servers that connect to the SAN, keeping in mind we only have 2 iSCSI subnets and not 4? Right now we call these 2 ports SAN0 and SAN1, respectively, since our iSCSI subnets are:

10.0.0.x

10.0.1.x

I was thinking we'd need 2 additional subnets to utilize 2 additional ports, unless we can for sure assign 2 iSCSI IPs on the first subnet to ports 1 and 2, and then 2 more on the 2nd subnet to ports 3 and 4? Then, IF that is possible (which I think it is, based on the previous replies), would there be a tangible gain in throughput using 4 NIC ports instead of just 2, especially considering our current PowerPath setup? Would there then be 8 paths showing in PowerPath instead of the 4 we currently see?

2.) Assuming all the above is good and that is the way we elected to do things, would there be any issue with letting some additional SAN VMs utilize some or all of those 4 ports that are already passed to the FILESERVER VM? We have no plans to do this right now, but "they" still want to know if it is possible. If so, would other VMs sharing these ports negate any gain from using 4 iSCSI ports in the first place?

Also, I guess I will add a #3: is 'Round Robin' or 'Least Queue Depth' the better iSCSI load balance policy for Server 2008 and Server 2003, considering we have PowerPath? Since we currently only have 2003 production servers connected to the SAN, we have always used Round Robin, because that is what we were shown to do by DELL. But I thought I read somewhere that in 2008 it was not the best choice, or does it even matter?

many thanks

Message was edited by: OHCA - I had something backwards I needed to correct

Message was edited by: OHCA

4.5K Posts

January 25th, 2011 13:00

Sorry for the late reply.

1. Is the version of PowerPath that you have from DELL? If so, it is probably called "Basic", which is a different version than the EMC version called Base.

DELL PowerPath has different rules for the NICs supported, the number of paths per NIC, and load balancing - you'll need to check with DELL about this.

EMC PowerPath Base allows one NIC and two paths. PowerPath full allows two NICs and four paths (iSCSI) and also supports full load balancing.

2. Best practice for iSCSI is listed in emc245445 - you should be using two subnets if you use two NICs. This is both an EMC and VMware recommendation.

On the array, use two subnets for each SP - see my post above.

I'm not an ESX person, so I am not able to answer the OS-specific questions.

glen

16 Posts

January 25th, 2011 14:00

Well, we did get our PowerPath originally when the DELL engineer came onsite to deploy our SAN. However, we have upgraded it since then to a newer version, and I'd swear we downloaded it from EMC's site, but I can't recall. When opening PowerPath on a host and looking in the "About", it displays the following:

PowerPath Administration

EMC Corporation

Version: 5.3  (build 311)

We show 4 paths and activity on only 2 at a time, so I would think that means we have redundancy but no load balancing? But we do have an actual license key for PowerPath, so I assume we have something more than the basic version? Full, perhaps?

We do use 2 NIC ports and 2 iSCSI subnets.

EDIT:

Also, should we use Round Robin or Least Queue Depth for our 2008 hosts?

Message was edited by: OHCA

4.5K Posts

January 25th, 2011 14:00

Run the following commands:

powermt check_registration - looking for Capabilities: All

powermt display dev=all - you can copy one of the LUNs into here - I'm looking for the Policy entry - the normal one is called CLAROpt - this is for load balancing.

Pseudo name=harddisk2
CLARiiON ID=APM000xxxxxxx [xxAdmin1]
Logical device ID=xxxxxxxxxxxxxxxxxxx [LLEFS Data MetaBase]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A
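
As a side note (a minimal sketch, assuming the registration check does come back with Capabilities: All): if the policy shows something other than CLAROpt, it can be set back with the commands below - "co" is PowerPath's short code for the CLAROpt policy; check the PowerPath CLI guide for your version first.

powermt set policy=co dev=all - set the CLAROpt load-balancing policy on the managed devices

powermt save - make the setting persistent across reboots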

glen
