January 16th, 2012 06:00
Queue Depth and SCSI
I have a question; I'm not sure I understand this correctly. From what I've read, SCSI allows 64 outstanding commands, but Queue Depth can be set to any value, with EMC recommending 256 on Windows. If SCSI limits this to 64, what's the point of setting Queue Depth (or Execution Throttle on QLogic) to anything higher? Are they not related?


Storagesavvy
January 16th, 2012 10:00
Queue Depth and Execution Throttle relate to different queues, and both can be set in the HBA.
As an example, a CX4 or VNX array has a queue for each front-end port and a separate queue for each LUN. The Queue Depth setting limits the number of outstanding I/Os the host can issue to a specific LUN. Execution Throttle limits the number of outstanding I/Os the host can issue to a specific front-end port. Execution Throttle is generally set higher, since the port queue is deeper and several LUNs are typically accessed through the same port.
For reference, the port queue depth on a CX4 or VNX is effectively 1600 (per port) and the LUN queue is dependent on the RAID type and number of disks for the LUN.
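To make the arithmetic concrete, here is a rough Python sketch of how the two settings interact with the port queue. The host count, LUN count, and settings below are made-up example values, not anything read from an array:

# Rough illustration (not an EMC tool): how per-LUN Queue Depth and the
# per-port Execution Throttle relate to the array's front-end port queue.
# All host/LUN counts below are hypothetical example values.

PORT_QUEUE_LIMIT = 1600        # effective CX4/VNX front-end port queue (per port)

hba_queue_depth_per_lun = 32   # host "Queue Depth" setting (per LUN)
execution_throttle = 256       # host limit per front-end port
luns_per_host_on_port = 10     # example: LUNs this host reaches through the port
hosts_sharing_port = 8         # example: hosts attached to the same port

# The worst case a single host can put on the port is capped by both settings:
per_host_worst_case = min(hba_queue_depth_per_lun * luns_per_host_on_port,
                          execution_throttle)

# Sum across every host sharing the front-end port:
port_worst_case = per_host_worst_case * hosts_sharing_port

print(f"Per-host worst case : {per_host_worst_case} outstanding I/Os")
print(f"Port worst case     : {port_worst_case} of {PORT_QUEUE_LIMIT}")
if port_worst_case > PORT_QUEUE_LIMIT:
    print("Port queue could overflow -> hosts may see QFULL")

With these example numbers the summed worst case (2048) exceeds the 1600 port queue, which is exactly the situation where hosts start seeing QFULL.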
Richard J Anderson
StunningSteve1
January 24th, 2012 12:00
Glen,
I would consider this resolved, but when I search for emc204523, "Primus emc204523", or "Support Solution emc204523", I get nothing. What am I missing when trying to search for this?
kelleg
January 24th, 2012 12:00
Also see Knowledgebase article (Support Solution) emc204523 for more information about Queue Depth.
Was your question answered? If so, could you please mark the question Answered and award points to the person providing the correct answer?
glen
kelleg
January 24th, 2012 15:00
That's weird - I can't see it either. I've copied it below:
The following is a Primus(R) eServer solution:
ID: emc204523
Domain: EMC1
Solution Class: 3.X Compatibility
Goal What is the cause of high Average Busy Queue Lengths (ABQL) for CLARiiON LUNs and ports?
Goal Why are there queue full (QFULL) issues on the SP ports?
Goal Why are hosts seeing high queuing (QFULL) on array?
Goal How do I view the Queue Full data in Analyzer?
Fact Product: CLARiiON
Fact Product: VNX Series
Fact EMC SW: Navisphere Analyzer
Fact EMC SW: Unisphere
Symptom Performance
Symptom High ABQL on multiple drives can lead to hosts receiving "Queue Full" messages.
Symptom Hosts may experience some timeouts, even if the overall performance of the CLARiiON is OK. The response to a QFULL is HBA dependent, but it typically results in a suspension of activity for more than one second. Though rare, this can have serious consequences on throughput if this happens repeatedly.
Change New hosts may have been connected to the CLARiiON or the HBA settings changed on existing hosts.
Cause The CLARiiON returns a QFULL flow control command under the following conditions:
The HBA execution throttle thresholds on the hosts may be set at too high a value (such as 256).
Fix Reduce the HBA execution throttle thresholds on the hosts to 32 (or lower if a large number of hosts are connected to the CLARiiON).
If using QLogic HBAs, use the SANsurfer utility to change the Execution Throttle for each HBA. This can be done online. In newer versions of SANsurfer, Execution Throttle is found under "Advanced HBA Settings": select an HBA port, then Parameters, then the "Select Settings" drop-down. The EMC default setting for Execution Throttle is 256; if it is higher than 256, change it to 256, and if it is already 256, try lowering it to 32.
The same target queue length restrictions apply to all other HBA makes and models. With Emulex, these settings can be changed using HBAnyware.
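As a back-of-the-envelope sizing sketch in Python (my own illustration, not an official EMC formula): divide the 1600-deep port queue among the hosts sharing the port, and clamp to the 32 suggested in the fix above.

PORT_QUEUE_LIMIT = 1600

def suggested_execution_throttle(hosts_on_port, ceiling=32):
    """Split the port queue evenly across hosts, never exceeding `ceiling`."""
    per_host_share = PORT_QUEUE_LIMIT // max(hosts_on_port, 1)
    return min(ceiling, per_host_share)

for hosts in (4, 16, 64, 128):
    print(hosts, "hosts ->", suggested_execution_throttle(hosts))
# 4 hosts   -> 32  (the even share of 400 is clamped to 32)
# 64 hosts  -> 25
# 128 hosts -> 12

This matches the guidance above: 32 is fine for modest fan-in, and the value should drop further when a large number of hosts share the same front-end port.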
Note Starting in FLARE Release 26, Navisphere Analyzer collects the queue full data on the front-end ports while Analyzer is running. This data can be seen when opening an Analyzer archive (NAR) using Unisphere Analyzer version 30. To see the queue full values, select an SP Port in the SP tab in Analyzer and then select the "Queue Full Count" value. Please see the EMC CLARiiON Best Practices for Performance and Availability: Release 30 Firmware Update - Applied Best Practices; the next section, entitled "Queuing, concurrency, queue-full (QFULL)", gives a more complete description. There is a problem in FLARE Releases 26, 28, and 29 where the Queue Full data in the NAR files is incorrect; this has been fixed in Release 26 patch 32, Release 28 patch 708, and Release 29 patch 012.
Queuing, concurrency, queue-full (QFULL)
A high degree of request concurrency is usually desirable, and results in good resource utilization. However, if a storage system’s queues become full, it will respond with a queue-full (QFULL) flow control command. The VNX and CX4 front-end port drivers return a QFULL status command under two conditions: when the total number of outstanding requests on a front-end port exceeds the port queue limit (1600), or when the outstanding requests to a single LUN exceed that LUN’s queue limit ((14 * the number of data drives) + 32).
The host response to a QFULL is HBA-dependent, but it typically results in a suspension of activity for more than one second. Though rare, this can have serious consequences on throughput if it happens repeatedly.
The best-practices 1600 port queue limit allows for ample burst margin. In most installations, the maximum load can be determined by summing the possible loads for each HBA accessing the port and adjusting the HBA LUN settings appropriately. (Some operating system drivers permit limiting the HBA concurrency at a global level regardless of the individual LUN settings.) In complex systems comprised of many hosts, HBAs, LUNs, and paths, it may be difficult to compute the worst-case load scenario (which may never occur in production anyway). In this case, use the default settings on the HBA, and if QFULL is suspected, use Unisphere Analyzer (release 30 or later) to determine whether the storage system’s front-end port queues are full by following the steps described below. Information on how to use Unisphere Analyzer can be found in the Unisphere online help.
HBA queue depth settings usually eliminate the possibility of LUN-generated QFULL. For instance, a RAID 5 4+1 device would require 88 parallel requests ((14*4) + 32) before the port would issue QFULL. If the HBA queue-depth setting is 32, that limit may never be reached. However, if there are multiple paths to a LUN, the maximum queue depth for all HBA paths in total may be higher than the LUN maximum queue depth. RAID 1 (or RAID 1/0 (1+1)) is the RAID type most likely to encounter a queue-full issue. For example, if the HBA queue-depth default setting were raised to a larger value (such as 64) to support greater concurrency for large metaLUNs owned by the same host, the RAID 1 device could reach queue-full because its limit is 46 requests ((1*14) + 32).
QFULL is never generated as a result of a drive’s queue-depth.
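Here is the LUN-queue arithmetic from the paragraph above worked out in a short Python sketch (illustrative only; the path count and queue depth are example values):

# Worked example of the LUN queue limit described above:
#   LUN queue limit = (14 * number of data drives) + 32

def lun_queue_limit(data_drives):
    return 14 * data_drives + 32

print("RAID 5 4+1 :", lun_queue_limit(4))   # 88 requests before QFULL
print("RAID 1 1+1 :", lun_queue_limit(1))   # 46 requests before QFULL

# With the default HBA queue depth of 32 and a single path, neither limit is
# reachable.  With a larger queue depth (such as 64) or several paths to the
# LUN, the total outstanding requests can exceed the RAID 1 limit:
hba_queue_depth = 64
paths_to_lun = 1
total_outstanding = hba_queue_depth * paths_to_lun
print("QFULL possible on RAID 1:", total_outstanding > lun_queue_limit(1))  # True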
Port statistics are collected for Navisphere Analyzer; this data includes several useful statistics for each individual port on the SP:
Note For host specific help with setting the queue depth, please see the following articles:
VMware ESX: emc274169
Windows: emc209302
Linux: emc90132
Note The Average Busy Queue Length (ABQL) is one of the "Advanced" characteristics, which can be seen in Navisphere Analyzer and is defined as follows:
The average number of requests waiting at a busy system component to be serviced, including the request that is currently in service.
Since this queue length is counted only when the SP is not idle, the value indicates the frequency variation (burst frequency) of incoming requests. The higher the value, the bigger the burst and the longer the average response time at this component. In contrast, the average queue length also includes idle periods when no requests are pending. If 50% of the time there is just one outstanding request and the other 50% of the time the SP is idle, the average busy queue length will be 1. The average queue length, however, will be ½.
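A tiny numerical illustration in Python of that 50%-busy example (the sampled queue lengths are made up, alternating one outstanding request and idle):

samples = [1, 0, 1, 0, 1, 0, 1, 0]   # queue length sampled over time

avg_queue_len = sum(samples) / len(samples)   # includes idle samples -> 0.5
busy = [s for s in samples if s > 0]
abql = sum(busy) / len(busy)                  # busy periods only -> 1.0

print("Average queue length:", avg_queue_len)   # 0.5
print("ABQL                :", abql)            # 1.0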
Note If Advanced characteristics, such as ABQL, cannot be seen in Navisphere Analyzer, this can be changed with the following setting:
Select Tools -> Analyzer -> Customize and checking or clearing the Advanced box.
Note In arrays running FLARE Release 29 and later, the Queue Full statistic is collected when running Analyzer and can be seen when the archive is opened using Analyzer Release 30 by selecting the SP tab and then selecting the SP Ports. See emc218359 for more information.