mb_live

Queue Depth of FA Port

I've searched and searched and can't find any answer.

Does anyone know how to find the queue depth for a given frame's FA ports? I've seen websites saying it's typically 4096 per port, and on here I saw someone reference a CX-4 as having a queue depth of 1600. I'm just trying to find out whether you can actually tell, and whether there is a command or any vendor documentation that lists this number.

Thanks!

SCI2

Re: Queue Depth of FA Port

What platform? And are you looking for the actual queue depth or the maximum queue depth? For VMAX, you can find the actual queue depths in the Performance section under the FE Director metrics: select All metrics instead of KPIs and you'll see the queue depth ranges and average queue depth ranges. The ranges represent a range of IOs queued. As an IO enters the queue, it first checks how deep the queue is; based on that depth, the applicable queue-depth bucket is incremented with the value seen by the IO. For example, an IO that encounters a queue depth of 7 will increment bucket #2 (depth 5-9 for OS or 7-14 for MF) by 7. The intent of these buckets is to identify IO bursts, which in turn generate large queues and long response times.
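
Here's a rough Python sketch of how I read that bucket accounting; the 5-wide OS / 7-wide MF bucket widths beyond the two ranges named above are my assumption, not something from the product docs:

def record_io(buckets, queue_depth, bucket_width=5):
    """Credit the bucket covering the depth this IO saw when it arrived.
    bucket_width=5 models the OS ranges (0-4, 5-9, ...); use 7 for MF."""
    index = queue_depth // bucket_width       # depth 7 -> second bucket (5-9) for OS
    buckets[index] = buckets.get(index, 0) + queue_depth
    return index

buckets = {}
for depth in (0, 3, 7, 7, 12):                # depths seen by five arriving IOs
    record_io(buckets, depth)
print(buckets)                                # {0: 3, 1: 14, 2: 12}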

mb_live

Re: Queue Depth of FA Port

Looking for the maximum on a Fibre Channel port.

SCI2

Re: Queue Depth of FA Port

What platform?  But more importantly, what's driving the question?

mb_live

Re: Queue Depth of FA Port

On a VMAX 20K.

SCI2

Re: Queue Depth of FA Port

Can I ask what's driving the question?

mb_live

Re: Queue Depth of FA Port

If I know the max of the FA port, that will help in setting the host queue depth.

SCI2

Re: Queue Depth of FA Port

Well, technically, the DMX and VMAX both support a maximum of 12,288 queue records per FA slice/CPU, and two FA ports share those queues. Enginuity limits the number that any single device can use: each LUN is guaranteed at least 32 and can dynamically borrow up to 384. An FA, though, is just a pathway for your data down to your volume/LUN. So a volume QD of 384 is possible (queue records can be borrowed from non-busy volumes), but that doesn't necessarily mean you'll ever see such a deep QD; it rarely makes sense to queue that much data against a single volume, not to mention the hardware needed to make use of such a deep queue.

Example: you would need a LUN QD of 256 to keep a 32-way RAID 5 7+1 striped meta busy on each individual spindle (provided your I/O pattern is random and you have enough threads).
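
Just to show the arithmetic behind that example (the one-outstanding-IO-per-spindle assumption is mine, and it assumes each meta member sits on its own 7+1 group):

meta_members = 32          # 32-way striped meta
drives_per_group = 7 + 1   # RAID 5 7+1 behind each meta member
spindles = meta_members * drives_per_group
print("spindles behind the meta:", spindles)       # 256
print("LUN QD for ~1 IO per spindle:", spindles)   # 256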

With that said, I would refer to the EMC Host Connectivity Guide for your OS of choice. Those documents contain the best-practice setups for that particular operating system. There are also documents from Emulex and QLogic that go into queue depth settings in more detail. Setting queue depth properly often requires trial and error for a given environment as well. However, below is an excerpt from the white paper docu6351, "Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment":

In order to avoid overloading the storage array's ports, you can calculate the maximum queue depth using a combination of the number of initiators per storage port and the number of LUNs ESX uses. Other initiators are likely to be sharing the same SP ports, so these will also need to have their queue depths limited. The math to calculate the maximum queue depth is:

QD = Maximum Port Queue Length / (Initiators * LUNs)

For example, there are 4 servers with single HBA ports connected to a single port on the storage array, with 20 LUNs masked to each server. The storage port's maximum queue length is 1600 outstanding commands. This leads to the following queue depth calculation:

HBA Queue Depth = 1600 / (4 * 20)

In this example, the calculated HBA queue depth would be 20. A certain amount of over-subscription can be tolerated because all LUNs assigned to the servers are unlikely to be busy at the same time, especially if additional HBA ports and load-balancing software are used. So in the example above, a queue depth of 32 should not cause queue full. However, a queue depth value of 256 or higher could cause performance issues.
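
And here is the formula from the excerpt as a quick Python check (just the arithmetic, nothing vendor-specific):

def hba_queue_depth(max_port_queue, initiators, luns_per_initiator):
    # QD = Maximum Port Queue Length / (Initiators * LUNs)
    return max_port_queue // (initiators * luns_per_initiator)

# 4 servers, one HBA port each, 20 LUNs masked to each, 1600-command port queue
print(hba_queue_depth(1600, 4, 20))   # 20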

I have found success with starting at 32, establishing a baseline performance profile, then adjusting and comparing to the baseline. Raise the QD in small increments; I use increments of 32. I usually only change QDs when I can pinpoint for certain that I have a QD problem; otherwise you end up spending an enormous amount of time messing with QDs when they have no impact on performance until you change the value.
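
If it helps, here is a purely illustrative sketch of that step-by-32 approach; apply_queue_depth() and measure_latency() are placeholders for whatever driver setting and benchmark you actually use, not real APIs:

def tune_queue_depth(apply_queue_depth, measure_latency,
                     start=32, step=32, ceiling=256):
    """Raise QD in increments of `step` while latency keeps improving."""
    qd = start
    apply_queue_depth(qd)
    best_qd, best_latency = qd, measure_latency()   # baseline at QD 32
    while qd + step <= ceiling:
        qd += step
        apply_queue_depth(qd)
        latency = measure_latency()
        if latency >= best_latency:                 # no improvement: back off
            apply_queue_depth(best_qd)
            break
        best_qd, best_latency = qd, latency
    return best_qd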

Hope that helps!

mb_live

Re: Queue Depth of FA Port

This is great info, thanks!!

Where did you get the maximum number of queue records per VMAX FA?

SCI2

Re: Queue Depth of FA Port

From the "smart" people
