


March 23rd, 2017 22:00

Host to FE ports limitation for VMAX 250F

Hi

We have an upcoming implementation of a VMAX 250F to be connected to 4 HP and 4 Dell servers, all for ERP. The VMAX 250F has 2 IO modules per director. The plan is to use 4 ports per director (2 ports per slot), for a total of 8 ports to be zoned to all servers. All 8 ports will be in the same port group. Are there any limitations on how many FE ports can be zoned to a server?

Thank you!

121 Posts

March 24th, 2017 06:00

The main limits that apply here are:

  • 32 ports maximum per Port Group
  • 512 maximum HBAs zoned to each VMAX port
  • 4096 maximum TDEVs mapped per VMAX port

It looks like you're fine with the config you're proposing.
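As a quick illustration, the three limits above can be checked against a proposed configuration in a few lines. The deployment numbers used below (8 ports in the group, 16 HBAs zoned per port, 500 TDEVs mapped) are illustrative assumptions for a setup like the one described, not values read from a real array.

```python
# Sanity-check a proposed VMAX 250F front-end config against the
# published limits quoted above. All deployment numbers here are
# illustrative assumptions, not values from a real array.

MAX_PORTS_PER_PORT_GROUP = 32
MAX_HBAS_PER_PORT = 512
MAX_TDEVS_PER_PORT = 4096

def within_limits(ports_in_group, hbas_per_port, tdevs_per_port):
    """Return True if the proposed config stays inside all three limits."""
    return (ports_in_group <= MAX_PORTS_PER_PORT_GROUP
            and hbas_per_port <= MAX_HBAS_PER_PORT
            and tdevs_per_port <= MAX_TDEVS_PER_PORT)

# 8 FE ports in one port group, 8 servers with (assumed) 2 HBAs each
# zoned to every port -> 16 HBAs per port, and an assumed 500 TDEVs.
print(within_limits(ports_in_group=8, hbas_per_port=16, tdevs_per_port=500))
```

With the numbers from the original question, every limit is comfortably satisfied.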

274.2K Posts

March 25th, 2017 03:00

Thank you! I appreciate it much!

2 Posts

March 29th, 2017 07:00

Ricky,

I know you got the answers you were looking for, but I am in a similar boat. My 250F has only one 4-port 16Gbps IO module per director, so 8 ports total. EMC's recommendation has been to use only a total of 4 ports (2 per director), not all 8.

They have not provided any best-practice guides, though I did find a VMware-based guide which backs this up. We'll cable all 8 ports, but we're still uncertain whether to follow the recommendation and zone only 4. The recommendation is being pushed for simplicity, and the 4k (4096) TDEV-per-port maximum, which we'll never reach, was also mentioned.

So far I have only been through Using EMC VMAX Storage in VMware vSphere Environments (h2529-vmware-esx-svr-w-symmetrix-wp-ldv.pdf), where I see the recommendations. Did you find any best practices on using more than 4 ports?

Side note: if you boot from SAN and use LUN 0, look out for the ACLX device using that LUN on the lowest port on director 1D. You'll never be able to set that LUN ID in Unisphere if the ACLX port is in your FA port group. You can force it via SYMCLI.
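For anyone hitting the same ACLX/LUN 0 collision, a dry-run sketch of the SYMCLI route is below. It only prints the `symaccess` command that can force a starting LUN address when creating the masking view; the SID, group names, and view name are all placeholders, and exact options can vary by Solutions Enabler release, so verify against your `symaccess` documentation before running anything for real.

```shell
#!/bin/sh
# Dry-run only: print the SYMCLI command that would force LUN 0 when
# creating a masking view (something Unisphere may refuse while the ACLX
# device holds LUN 0 on a port in your FA port group). Nothing here
# touches an array; SID, view, and group names are all placeholders.
print_force_lun_cmd() {
    sid=000197900123      # placeholder array SID
    view=erp_mv           # placeholder masking view name
    printf 'symaccess -sid %s create view -name %s \\\n' "$sid" "$view"
    printf '  -sg erp_sg -pg erp_pg -ig erp_ig -lun 0\n'
}
print_force_lun_cmd
```

Swap in your own SID and storage/port/initiator group names, then run the printed command yourself once you've confirmed the syntax for your Solutions Enabler version.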

522 Posts

March 29th, 2017 12:00

I have been directing customers to use 4 as well, rawbytes, based on the docs you mention (the ESX connectivity guide, and I think the VMAX techbook has something about this as well). In the old days it used to be 2 or 4 FAs, and it has moved to 4 for more concurrency. What I usually suggest is that while we are using groups of 4 FAs, monitor the load, and if they are giant ESX clusters, spread them across groups where applicable. I don't think there is really any other reason to go higher, and the downside of going higher can be path limitations (I have hit these on XtremIO in cases where people follow the out-of-sight-out-of-mind practice of 8 paths across all SCs). Because that has bitten me, I stick with the game plan of starting with 4 and monitoring, then either adding more or spreading the load.

I realize that is not the answer you are looking for, with a concrete doc to prove it, but there might be some solace in others approaching it the same way, and in it being in line with what the EMC SAs are saying. Let me see if I can find the doc I mention above and include a snippet.

522 Posts

March 29th, 2017 13:00

[Screenshot: 3-29-2017 3-57-30 PM.png]

Figures that I find the snippet right after I post. Let me know if that helps; this is from the VMAX/ESX Techbook.

2 Posts

March 29th, 2017 13:00

Thanks echolaughmk. That is exactly where I confirmed what they were saying. My only issue is that this doc is VMware-based (which I'll follow in our VMware environment), but is this the best practice for all open systems, such as Windows and Linux?

Have you found any similar docs for other open systems?

522 Posts

March 30th, 2017 08:00

Nah, your point is valid for the other OSes; I haven't seen anything outside of VMware. For other OSes, I tend to look at the load or current port utilization they have today and make a call on more than 2 (never less than 2, for redundancy). I have some AIX hosts with 8+ FAs and some Windows hosts with only 2 FAs total, so it tends to be our industry's favorite answer: "it depends". I find for most things the tiering looks like this:

Low I/O or application profile/classification: 2 FAs

Medium I/O or application profile/classification: 2-4 FAs, with 2 being OK most of the time

High I/O or application profile/classification: 4+ FAs, with 4 being the choice most of the time (ESX falls into this because it usually has a ton of servers all seeing the same datastores, among other reasons). Some massive hosts or environments can command many more FAs in certain circumstances, so this is where it can "depend".

Hope that helps. I'll go back through a few docs to see if there are any updates I am missing, but the last time I re-read them, I don't think I saw a similar snippet for anything outside of VMware.

Thanks!
