October 3rd, 2011 14:00

Slightly confused about the number of ports to use per SP with iSCSI

Hi.

We have a new VNX5300 waiting to get configured, and I need to plan out the network infrastructure before the EMC tech arrives. It has 4x 1 Gbit iSCSI ports per SP (8 ports in total), and I'd like to get the most performance out of it until we move over to 10 GbE iSCSI.

From what I can read in the docs, the recommendation is to use only two ports per SP, with one active and one passive. Why is this? It seems kind of pointless to have quad-port I/O modules and then be told not to use more than two of the ports.

Also, I'm a bit unsure about the zoning. The best practices guide states that you should separate each port on each SP from the others on different logical networks. Does this mean I have to create four logical networks to be able to use all 8 ports?

It also gives the following example:

http://i.imgur.com/VZi1E.jpg

Does this mean that A0 and B0 should sit on the same physical switch as well? Won't that send all traffic through one switch (if both A1 and B1 are passive)?

8.6K Posts

October 3rd, 2011 15:00

I would suggest looking at the available reference architectures on Powerlink for your environment, like VMware, Windows, Exchange, etc.

115 Posts

October 4th, 2011 00:00

Hi Pauska

For fibre you usually connect A0 and B1 to switch 1, and A1 and B0 to the second switch. If you're using PowerPath, data will be load balanced across all active paths to the LUN. So if you have a LUN assigned to SPB, PowerPath can load balance across all B ports to that LUN (B0 and B1). For zoning, the recommendation is single initiator - single target zoning, so each zone contains only two WWNs (for WWN zoning). For example, on switch 1 you would have zone 1 with the HBA0 WWN and the SPA0 WWN, and zone 2 with the HBA0 and SPB1 WWNs; on switch 2, zone 1 would have the HBA1 and SPA1 WWNs, and zone 2 would have the HBA1 and SPB0 WWNs.
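To make that zoning layout easier to see at a glance, here is a minimal Python sketch of the single initiator - single target scheme; the WWN strings are made-up placeholders (real zones would use actual 16-hex-digit WWPNs):

```python
# Single initiator / single target zoning, one HBA + one SP port per zone.
# WWN names below are placeholders for illustration, not real values.
zones = {
    "switch1": [
        ("zone1", "HBA0_wwn", "SPA0_wwn"),
        ("zone2", "HBA0_wwn", "SPB1_wwn"),
    ],
    "switch2": [
        ("zone1", "HBA1_wwn", "SPA1_wwn"),
        ("zone2", "HBA1_wwn", "SPB0_wwn"),
    ],
}

# Every zone holds exactly two members: one initiator (HBA) and one target (SP port),
# and each HBA sees one A port and one B port so a LUN on either SP stays reachable.
for switch, zone_list in zones.items():
    for name, initiator, target in zone_list:
        print(f"{switch} {name}: {initiator} <-> {target}")
```

Note how each switch carries one A port and one B port, matching the A0/B1 on switch 1 and A1/B0 on switch 2 cabling described above.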

For iscsi

Follow the best practice guide and create the 4 logical networks; again, if you're using PowerPath it will load balance across the active paths. There are host limits on iSCSI ports, so you may want to spread your hosts across the ports. The recommended host limit per port depends on your system.
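As a rough illustration of what those four logical networks could look like, here is a minimal Python sketch that carves one /24 subnet per port pair out of a /22. The subnets, VLAN IDs, and IP assignments are my own assumptions for the example, not values from the best practices guide:

```python
import ipaddress

# One logical network (VLAN + subnet) per port pair; SPB mirrors SPA's ports,
# so A0/B0 share a subnet, A1/B1 share the next one, and so on.
ports = ["A0", "A1", "A2", "A3"]
subnets = ipaddress.ip_network("192.168.8.0/22").subnets(new_prefix=24)

networks = {}
for vlan, (port, subnet) in enumerate(zip(ports, subnets), start=10):
    hosts = subnet.hosts()
    networks[port] = {
        "vlan": vlan,                 # assumed VLAN numbering, 10..13
        "subnet": str(subnet),
        "spa_ip": str(next(hosts)),   # first host address for the SPA port
        "spb_ip": str(next(hosts)),   # second address for the matching SPB port
    }

for port, cfg in networks.items():
    print(port, cfg)
```

Each host would then need one NIC (or VLAN interface) in every subnet it should reach, which is exactly why using all four port pairs implies four NICs or VLANs per host.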

Maybe if you give a little more detail on what you're trying to achieve, like which systems you're connecting and what the 5300 will be used for, you may get some other guys posting replies.

I may have misunderstood what you meant, but zoning is used in FC, not iSCSI. You would either have a totally separate network for iSCSI or create VLANs to separate the traffic on the network.

15 Posts

October 4th, 2011 00:00

Thanks for the detailed reply.

There are no reference architectures on Powerlink for iSCSI; there are only ones for FC/FCoE/NFS.

The only relevant documentation I can find is the two documents I've already referenced in my post: the techbook "Using EMC VNX Storage with VMware vSphere" and the "EMC Unified Storage Best Practices for Performance and Availability: Common Platform and Block Storage 31.0".

Both of these use two ports per SP without explaining why. They also say very little about zoning.

15 Posts

October 5th, 2011 03:00

Hi again.

Beagless: Sorry for using the wrong term; I'm used to zoning from previous FC deployments. I meant, of course, separating iSCSI traffic into separate broadcast domains (VLANs).

We ended up using two ports per SP; otherwise we'd have to create four logical networks (and have four ports in each host). I'll just reserve the last four ports (two per SP) for later use, a DMZ or something similar.

Thanks for the replies everyone!

4.5K Posts

October 19th, 2011 15:00

On Powerlink, look for knowledgebase article emc245445, which attempts to answer some of the questions you're asking. In the iSCSI world there's no real best practice, just what others have found to work well.

Just remember that iSCSI is a low-cost connection scheme, unlike FC. You have to take into consideration the load on the host NICs, network contention, VLANs, bandwidth, etc.

FC is easier and has better performance due to HBAs. Using NICs for iSCSI traffic can quickly overload a single 1 Gb NIC if many connections share that one NIC. The same goes for the array: too many hosts will overload an SP port (FC has this problem too, but usually handles it better).
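A quick back-of-the-envelope version of that overload point, as a Python sketch; the per-host demand and the overhead factor are illustrative assumptions, not array specifications:

```python
# Rough estimate of how many hosts a single 1 GbE iSCSI port can carry
# before it becomes the bottleneck. All numbers below are assumptions
# chosen for illustration, not vendor-recommended limits.
LINK_MBPS = 1000        # nominal 1 GbE line rate
USABLE_FRACTION = 0.8   # rough headroom left after TCP/iSCSI overhead
PER_HOST_MBPS = 200     # assumed sustained demand per host

usable_mbps = LINK_MBPS * USABLE_FRACTION
max_hosts = int(usable_mbps // PER_HOST_MBPS)
print(f"~{max_hosts} hosts at {PER_HOST_MBPS} Mb/s each before a single "
      f"1 GbE port saturates")
```

With these numbers only a handful of busy hosts saturate one port, which is why spreading hosts across the SP ports (and watching the per-port host limits) matters.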

glen
