I've searched and searched and can't find any answer.
Does anyone know how to find the queue depth for a given frame's FA ports? I've seen websites saying it is typically 4096 per port, and on here I saw someone reference a CX4 as having a queue depth of 1600. Just trying to find out if you can actually tell — is there a command or vendor documentation that lists this number?
What platform? And are you looking for the actual queue depth or the maximum queue depth? For VMAX, you can find the actual queue depths in the Performance section under the FE Director metrics: select All metrics instead of KPIs, and you'll see the Queue Depth ranges and the average queue depth ranges. The ranges represent ranges of queued IOs. As an IO enters the queue, it first checks how deep the queue is; based on that depth, the applicable queue-depth bucket is incremented by the value the IO saw. For example, an IO that encounters a queue depth of 7 will increment bucket #2 (depth 5-9 for open systems, or 7-14 for mainframe) by 7. The intent of these buckets is to identify IO bursts, which in turn generate large queues and long response times.
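A minimal sketch of the bucketing scheme described above. The open-systems bucket boundaries (bucket #2 covering depths 5-9) are taken from the post; the function itself is illustrative, not actual Enginuity code, and the remaining bucket boundaries are assumptions for the example.

```python
# Open-systems bucket ranges: bucket #1 covers depths 0-4, bucket #2 covers
# 5-9 (per the post); the higher buckets here are assumed for illustration.
OS_BUCKETS = [(0, 4), (5, 9), (10, 14), (15, 19), (20, float("inf"))]

def record_io(buckets, counts, observed_depth):
    """Find which bucket the observed queue depth falls into and
    increment that bucket's counter by the depth the IO saw."""
    for i, (lo, hi) in enumerate(buckets):
        if lo <= observed_depth <= hi:
            counts[i] += observed_depth
            return i
    raise ValueError("depth out of range")

counts = [0] * len(OS_BUCKETS)
# An IO that sees a queue depth of 7 lands in bucket #2 (index 1, range 5-9)
# and increments that bucket by 7.
bucket = record_io(OS_BUCKETS, counts, 7)
```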
Well, technically, the DMX and VMAX both support a maximum of 12,288 queue records per FA slice/CPU, and 2 FA ports share these queues. Enginuity limits the number any single device can use: each LUN is guaranteed at least 32, but can dynamically borrow up to 384. An FA, though, is just a pathway for your data down to your volume/LUN. So a volume QD of 384 is possible (queue records can be borrowed from non-busy volumes), but that doesn't necessarily mean you'll ever see such a deep QD; it rarely even makes sense to queue that much data to a single volume, not to mention the hardware needed to make use of such a deep queue.
Example: you would need a LUN QD of 256 to keep each single spindle of a 32-way RAID 5 7+1 striped meta busy (provided your IO pattern is random and you have enough threads).
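The figures above can be checked with back-of-the-envelope arithmetic. All the inputs come from the post (12,288 queue records per FA slice, 32 guaranteed per LUN, RAID 5 7+1, 32-way meta); the script below just does the multiplication and division.

```python
QRECS_PER_SLICE = 12288      # per FA slice/CPU, shared by 2 FA ports
GUARANTEED_PER_LUN = 32      # minimum queue records guaranteed to each LUN

# How many LUNs can all receive their guaranteed minimum on one slice:
luns_at_minimum = QRECS_PER_SLICE // GUARANTEED_PER_LUN

# RAID 5 7+1 = 8 physical spindles per group; a 32-way striped meta
# spreads IO across 32 meta members. With a random, multi-threaded
# workload, one outstanding IO per spindle keeps every drive busy:
spindles_per_member = 8
meta_members = 32
qd_to_keep_busy = meta_members * spindles_per_member

print(luns_at_minimum, qd_to_keep_busy)  # 384 256
```

The second result matches the LUN QD of 256 quoted in the example.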
With that said, I would refer to the EMC Host Connectivity Guide for your OS of choice; those documents contain the best-practice setups for that particular operating system. There are also documents from Emulex and QLogic that go into queue depth settings in more detail. Setting queue depth properly often requires trial and error for an environment as well. However, below is an excerpt from the white paper docu6351, "Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment."
In order to avoid overloading the storage array's ports, you can calculate the maximum queue depth using a combination of the number of initiators per storage port and the number of LUNs ESX uses. Other initiators are likely to be sharing the same SP ports, so these will also need to have their queue depths limited. The math to calculate the maximum queue depth is:

QD = Maximum Port Queue Length / (Initiators * LUNs)

For example, there are 4 servers with single HBA ports connected to a single port on the storage array, with 20 LUNs masked to each server. The storage port's maximum queue length is 1600 outstanding commands. This leads to the following queue depth calculation:

HBA Queue Depth = 1600 / (4 * 20)

In this example, the calculated HBA queue depth would be 20. A certain amount of over-subscription can be tolerated because all LUNs assigned to the servers are unlikely to be busy at the same time, especially if additional HBA ports and load-balancing software are used. So in the example above, a queue depth of 32 should not cause queue-full conditions. However, a queue depth value of 256 or higher could cause performance issues.
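The white-paper formula above is a one-liner. Here it is as a function, run against the excerpt's own example (the excerpt's arithmetic uses 4 initiators and 20 LUNs per server against a 1600-command port queue):

```python
def max_hba_queue_depth(port_queue_length, initiators, luns):
    """QD = Maximum Port Queue Length / (Initiators * LUNs)"""
    return port_queue_length // (initiators * luns)

# 4 single-HBA servers, 20 LUNs each, port queue of 1600 outstanding commands
qd = max_hba_queue_depth(1600, 4, 20)
print(qd)  # 20
```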
I have found success with starting at 32, establishing a baseline performance profile, then adjusting and comparing against that baseline. Raise the QD in small increments; I use increments of 32. I usually only change QDs when I can pinpoint for certain that I have a QD problem; otherwise you end up spending an enormous amount of time messing with QDs that have no impact on performance until you change the value.
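That tuning loop can be sketched as follows. This is only an illustration of the approach described above, assuming you step in increments of 32 and stop once the latency stops improving; `measure_latency` is a hypothetical placeholder for whatever benchmark you actually run (fio, vdbench, application-level timings, etc.).

```python
def tune_queue_depth(measure_latency, start=32, step=32, max_qd=256):
    """Start at a baseline QD, raise it in fixed increments, and keep the
    last setting that improved measured latency over the baseline."""
    best_qd, best_latency = start, measure_latency(start)
    for qd in range(start + step, max_qd + 1, step):
        latency = measure_latency(qd)
        if latency < best_latency:   # improvement over the current best
            best_qd, best_latency = qd, latency
        else:
            break  # raising QD no longer helps; stop here
    return best_qd

# Hypothetical measured latencies (ms) per QD, just to exercise the loop:
fake_results = {32: 10.0, 64: 8.0, 96: 9.0}
chosen = tune_queue_depth(lambda qd: fake_results.get(qd, 20.0))
print(chosen)  # 64
```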
Hope that helps!