February 11th, 2014 10:00

Queue Depth of FA Port

I've searched and searched and can't find an answer.

Does anyone know how you can find the queue depth for a given frame's FA ports? I've seen websites saying it is typically 4096 per port, and on here I saw someone reference a CX-4 as having a queue depth of 1600. I'm just trying to find out whether you can actually tell, and whether there is a command or vendor document that lists this number.

Thanks!

24 Posts

February 11th, 2014 12:00

What platform? And are you looking for the actual queue depth or the maximum queue depth? For VMAX, you can find the actual queue depths in the Performance section under FE Director metrics: select All metrics instead of KPIs, and you'll see the Queue Depth ranges and average queue depth ranges. Each range represents a range of queued IOs. As an IO enters the queue, it first checks how deep the queue is; based on that depth, the applicable queue-depth bucket is incremented by the depth the IO saw. For example, an IO that encounters a queue depth of 7 increments bucket #2 (depth 5-9 for open systems, or 7-14 for mainframe) by 7. The intent of these buckets is to identify IO bursts, which in turn generate large queues and long response times.
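The bucket accounting described above can be sketched roughly as follows. This is a minimal illustration, assuming the open-systems bucket width of 5 implied by the "depth 5-9" example; the exact boundaries and number of buckets are assumptions, not documented values.

```python
# Sketch of queue-depth bucket accounting as described in the post.
# Bucket widths (5 for open systems) are assumed from the "5-9" example.

def bucket_index(depth, width=5):
    """Map an observed queue depth to its histogram bucket.
    Bucket 1 covers depths 0-4, bucket 2 covers 5-9, and so on."""
    return depth // width + 1

class QueueDepthHistogram:
    def __init__(self, width=5, num_buckets=8):
        self.width = width
        self.buckets = {i: 0 for i in range(1, num_buckets + 1)}

    def record_io(self, observed_depth):
        """Each arriving IO increments its bucket by the depth it saw,
        so busy buckets grow fast and expose IO bursts."""
        idx = min(bucket_index(observed_depth, self.width), max(self.buckets))
        self.buckets[idx] += observed_depth

hist = QueueDepthHistogram()
hist.record_io(7)   # depth 7 -> bucket 2 (range 5-9) incremented by 7
```

Because the bucket is incremented by the observed depth rather than by 1, bursts that build deep queues dominate the histogram, which is exactly what makes them easy to spot.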

33 Posts

February 11th, 2014 12:00

Looking for the maximum on a Fibre Port

33 Posts

February 11th, 2014 12:00

If I know the max of the FA port, that will help in setting the host queue depth.

24 Posts

February 11th, 2014 12:00

Can I ask what's driving the question?

33 Posts

February 11th, 2014 12:00

on a VMAX 20k

24 Posts

February 11th, 2014 12:00

What platform?  But more importantly, what's driving the question?

24 Posts

February 11th, 2014 13:00

From the "smart" people

33 Posts

February 11th, 2014 13:00

Symmetrix - QFULL limit

They weren't able to produce the document either

33 Posts

February 11th, 2014 13:00

This is great info, thanks!!

Where did you get maximum queue records per VMAX FA?

24 Posts

February 11th, 2014 13:00

Well, technically, the DMX and VMAX both support a maximum of 12,288 queue records per FA slice/CPU, and two FA ports share these queues. Enginuity limits the number that any single device can use: each LUN is guaranteed at least 32, but can dynamically borrow up to 384. An FA, though, is just a pathway for your data down to your volume/LUN. So a volume QD of 384 is possible (queue records can be borrowed from non-busy volumes), but that doesn't necessarily mean you'll ever see such a deep QD; it doesn't even make sense to queue that much data to a single volume, not to mention the hardware needed to make use of such a deep queue.

Example: you would need a LUN QD of 256 to keep a 32-way RAID 5 (7+1) striped meta busy on every single spindle (provided your IO pattern is random and there are enough threads).
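The figures quoted above are easy to sanity-check with some back-of-envelope arithmetic. This sketch only restates the numbers from the post (12,288 queue records per slice, 32 guaranteed and 384 maximum per LUN, 32-way meta of 7+1 RAID 5 members); none of it is an official limit.

```python
# Back-of-envelope checks on the figures quoted in the post above.

QRECS_PER_SLICE = 12288   # queue records per FA slice/CPU (shared by 2 ports)
MIN_PER_LUN = 32          # guaranteed queue records per LUN
MAX_PER_LUN = 384         # maximum a single LUN can borrow

# How many LUNs could run at the 384-record ceiling simultaneously:
luns_at_max = QRECS_PER_SLICE // MAX_PER_LUN   # 32

# How many LUNs the slice can guarantee the 32-record minimum:
luns_at_min = QRECS_PER_SLICE // MIN_PER_LUN   # 384

# The RAID 5 example: a 32-way striped meta whose members are 7+1
# groups spans 32 * 8 = 256 drives, so a LUN QD of 256 is what it
# takes to put one outstanding IO on every spindle.
spindles = 32 * 8   # 256
```

The 256-spindle count is why the post calls a QD of 256 the point where every drive under that meta has work queued.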

With that said, I would refer to the EMC Host Connectivity Guide for your OS of choice; those documents contain the best-practice setups for each particular operating system. There are also documents from Emulex and QLogic that cover queue-depth settings in more detail. Setting queue depth properly often requires trial and error for a given environment as well. However, below is an excerpt from the white paper docu6351, "Host Connectivity with Emulex Fibre Channel Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment":

In order to avoid overloading the storage array's ports, you can calculate the maximum queue depth using a combination of the number of initiators per storage port and the number of LUNs ESX uses. Other initiators are likely to be sharing the same SP ports, so these will also need to have their queue depths limited. The math to calculate the maximum queue depth is:

QD = Maximum Port Queue Length / (Initiators * LUNs)

For example, there are 4 servers with single HBA ports connected to a single port on the storage array, with 5 LUNs masked to each server. The storage port's maximum queue length is 1600 outstanding commands. This leads to the following queue depth calculation:

HBA Queue Depth = 1600 / (4 * 20)

In this example, the calculated HBA queue depth would be 20. A certain amount of over-subscription can be tolerated because all LUNs assigned to the servers are unlikely to be busy at the same time, especially if additional HBA ports and load-balancing software are used. So in the example above, a queue depth of 32 should not cause queue-full conditions. However, a queue depth value of 256 or higher could cause performance issues.
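The formula in the excerpt is easy to sanity-check in code. A minimal sketch follows; note that the worked numbers (1600 / (4 * 20) = 20) imply "LUNs" in the formula means the total LUN count behind the port (4 servers * 5 LUNs = 20), which is an interpretation inferred from the example, not stated outright.

```python
# Sketch of the white-paper formula:
#   QD = Maximum Port Queue Length / (Initiators * LUNs)
# where "LUNs" is read as the total LUN count behind the port,
# as implied by the excerpt's 1600 / (4 * 20) example.

def max_hba_queue_depth(max_port_queue_length, initiators, total_luns):
    """Per-HBA queue depth that avoids overloading the storage port."""
    return max_port_queue_length // (initiators * total_luns)

servers = 4
total_luns = servers * 5   # 5 LUNs masked to each of 4 servers -> 20
qd = max_hba_queue_depth(1600, servers, total_luns)
print(qd)  # 20, matching the excerpt's worked example
```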

I have found success with starting at 32, establishing a baseline performance profile, then adjusting and comparing to the baseline. Raise the QD in small increments; I use increments of 32. I usually only change QDs when I can pinpoint for certain that I have a QD problem; otherwise you end up spending an enormous amount of time messing with QDs that have no impact on performance until you change the value.
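The tuning loop described above can be sketched as follows. This is only an illustration of the start-at-32, step-by-32 approach; `measure_latency` is a hypothetical stand-in for whatever benchmark you run against your own baseline.

```python
# Sketch of the iterative QD tuning described above: start at 32,
# raise in increments of 32, and stop once latency stops improving.
# measure_latency(qd) is a hypothetical hook for your own benchmark.

def tune_queue_depth(measure_latency, start=32, step=32, max_qd=256):
    best_qd = start
    best_latency = measure_latency(start)   # baseline at QD 32
    for qd in range(start + step, max_qd + 1, step):
        latency = measure_latency(qd)
        if latency >= best_latency:         # no improvement: stop raising QD
            break
        best_qd, best_latency = qd, latency
    return best_qd
```

As the post warns, run a loop like this only after you have pinpointed an actual QD problem; otherwise the time spent is wasted.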

Hope that helps!

33 Posts

February 11th, 2014 13:00

So EMC doesn't have this info for the general public?

24 Posts

February 11th, 2014 13:00

It's probably documented somewhere, most likely on support.emc.com in the knowledge base.

33 Posts

February 11th, 2014 14:00

So what is the "Maximum Port Queue Length" for a VMAX? From the information given so far, we have the VNX at 1600 and the best-practice host setting on a VMAX of 32, but I still do not know what the "Maximum Port Queue Length" is for the VMAX.

24 Posts

February 11th, 2014 14:00

The calculations are the same, but the max QD differs by platform. 1600 is for VNX, as that quote was taken from a VNX-focused white paper. The best-practice recommendation for VMAX is 32 as the QD starting point per HBA, which works well in the majority of environments.

33 Posts

February 11th, 2014 14:00

Looking at the post above, the question goes back to the equation: where do you find the "Maximum Port Queue Length" per storage port? In that example it is listed as 1600.
