odurasler
2 Iron

Queue Full Count in Analyzer

Could someone explain what the Queue Full Count in Analyzer means? I have a CX4-480, and when I look at the Queue Full Count metrics in Analyzer I see spikes all the way up to 680. I have read the "Queuing, concurrency, queue-full (QFULL)" section of the CLARiiON Best Practices for Performance and Availability paper, so I have a general idea of what a QFULL is, but I'm not sure how to read the QFC metric. Does this mean that my system is having no issues, since it is not above 1600?

Thanks!

8 Replies
avs
2 Iron

Re: Queue Full Count in Analyzer

Hello

You can also get the same QFULL event from the front-end port driver when the LUN queue depth exceeds (14 * the number of data drives in the LUN) + 32.
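To make the threshold concrete, here is a small sketch of that formula (the helper name is mine, not from any EMC tool):

```python
def lun_queue_limit(data_drives: int) -> int:
    """Per-LUN queue depth at which the CLARiiON front-end
    port driver starts returning QFULL: (14 * data drives) + 32."""
    return 14 * data_drives + 32

# A RAID5 4+1 group has 4 data drives:
print(lun_queue_limit(4))  # 88
```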

It's probably caused by a non-default HBA queue depth setting on the hosts.

680 QFULL events is an issue and can significantly decrease host performance.

Alex

odurasler
2 Iron

Re: Queue Full Count in Analyzer

Thanks Alex for responding.

So are you indicating that the peaks to 680 in Analyzer are bad? Hmmm... I may beg to differ.

kelleg
4 Ruthenium

Re: Queue Full Count in Analyzer

Also see Support Solution (Primus) emc204523

glen

avs
2 Iron

Re: Queue Full Count in Analyzer

It's definitely bad.

In a properly tuned configuration you should see 0 QFULL events.

Alex

odurasler
2 Iron

Re: Queue Full Count in Analyzer

Glen,

Do you know what metric to look at in Analyzer to figure out what the concurrent IO is?

"The total number of concurrent I/O requests on the Front-End FC port is greater than 1600"

kelleg
4 Ruthenium

Re: Queue Full Count in Analyzer

Determining concurrent IOs is not really possible with Analyzer; you just know that when the number is reached, you get the Queue Full. I believe that engineering may have something that would collect this data, but it would not be available outside of our lab.

glen

odurasler
2 Iron

Re: Queue Full Count in Analyzer

Just an update...

I was able to remedy the Queue Full by changing the Execution Throttle on the Windows physical server from 265 to 64. After taking Analyzer logs after the change, I noticed that QFULLs weren't being generated. However, I'm still running into a Q-IO issue within PowerPath on that physical server. It seems IOs still get queued up on the server.

Does it make sense to keep playing with the Execution Throttle? For example, I only have 2 separate RGs with a 4+1 configuration, so the maximum ET I can set is 88? Or will increasing the spindle count eliminate queued IOs on the host side?


Re: Queue Full Count in Analyzer

If you are no longer experiencing QFULLs, then there is no need to further adjust the Execution Throttle. For the queue depth of individual LUNs you'd adjust the QueueDepth setting on the HBA instead. However, since you aren't seeing QFULLs, that step is not necessary either. Exceeding the LUN queue depth (88 for a RAID5 4+1) will result in QFULLs at the port as well, so it does not sound like you are exceeding 88.
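Running the numbers from this thread through the (14 * data drives) + 32 formula quoted earlier shows why the change worked (a sketch; the helper names are mine):

```python
def lun_queue_limit(data_drives: int) -> int:
    # CLARiiON per-LUN queue depth limit: (14 * data drives) + 32
    return 14 * data_drives + 32

def exceeds_limit(execution_throttle: int, data_drives: int) -> bool:
    # True if the host-side setting alone can push the LUN past its limit
    return execution_throttle > lun_queue_limit(data_drives)

# RAID5 4+1 -> limit of 88; old vs. new Execution Throttle from this thread:
print(exceeds_limit(265, 4))  # True: 265 > 88, so QFULLs were expected
print(exceeds_limit(64, 4))   # False: 64 <= 88, so no QFULLs
```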

Queued IOs on the PowerPath paths indicate that the host is issuing IOs faster than the array can service them. Typically this is caused by the disks being saturated. Adding more spindles would help, or possibly changing the RAID type to RAID10. In Analyzer, you can look at the IOs and Utilization for individual disks in a RAID Group, which should give you an idea of whether there is a bottleneck there.

If your application is performing well, having some queued IOs in the path is not necessarily a bad thing. Some workloads stack IOs in the queue to get more concurrency, which can increase IOPS. The primary question is whether you have a perceived performance problem or not. If not, then further tuning may not be necessary.

Richard J Anderson
