Our company is in the middle of benchmark testing with IBM and they are asking if they can change the disk queue depth level from 16 (EMC recommendation) to 128 (IBM recommendation). The hosts are a mix of physical IBM AIX 5.3 and 6.1 as well as virtual hosts connecting to DMX4, and VMAX / VMAXe.
I am not familiar with what this setting really affects and will be researching it, but my boss would like an answer from my team ASAP, hence the question to the forums. The switch fabric is Cisco 9513s, plus Cisco switches in the IBM blades that are ISL'd into the fabric.
Thanks in advance for any insight or help
I know that on CX and VNX the storage port is the limiting factor (other brands have this as well). On CX the max is 1600 outstanding I/Os per port, so with 15 of these AIX hosts per port at queue depth 128 you'd already be past the max for a CLARiiON.
On Symmetrix / DMX / V-MAX it's a different story; AFAIK those ports don't have this issue, or at least the limit is much higher. That's my experience.
Does anyone else have more insight in this?
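To put rough numbers on the port-limit point above: here is a back-of-the-envelope check of aggregate outstanding I/Os against a front-end port ceiling. The host count, LUN count, and per-port limit below are illustrative assumptions, not measured values from any array.

```shell
#!/bin/sh
# Oversubscription sanity check (illustrative numbers only).
port_limit=1600      # example per-port outstanding-I/O ceiling (CX figure from above)
hosts=15             # hosts zoned to the port (assumption)
luns_per_host=1      # LUNs each host drives through this port (assumption)
queue_depth=128      # proposed per-LUN queue depth

# Worst case: every host fills every LUN's queue at once.
aggregate=$(( hosts * luns_per_host * queue_depth ))
echo "aggregate=$aggregate limit=$port_limit"
if [ "$aggregate" -gt "$port_limit" ]; then
    echo "port oversubscribed"
fi
```

With these numbers the aggregate is 1920 against a 1600 limit, i.e. the port is oversubscribed before fan-out from multipathing is even counted.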
We generally kept 16 as the queue depth for AIX with Symmetrix as a best practice. That said, if you're doing a benchmark exercise for your applications and want to improve IOPS, you can certainly increase the value to 32. Before jumping to a much higher value like 128, you should try it on a test array first and verify it doesn't hurt response time.
You can refer to the links below to understand more about queue depth and what changing it affects.
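For what it's worth, on AIX this is a per-hdisk device attribute set on the host side. A sketch of checking and raising it is below; the hdisk name and target value are examples, and on a busy disk the change is deferred with `-P` until the next reboot (or until the device is reconfigured).

```shell
# Show the current queue depth for one hdisk
lsattr -El hdisk4 -a queue_depth

# Show the allowed range of values for that attribute
lsattr -Rl hdisk4 -a queue_depth

# Raise it to 32; -P records the change in the ODM and applies it at next reboot
chdev -l hdisk4 -a queue_depth=32 -P
```

Remember this is per hdisk, so scripting it across all the hdisks backed by the array (and re-checking the FC adapter's `num_cmd_elems` so the adapter isn't the new bottleneck) is usually part of the same exercise.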
I just wanted to know: do we set disk queue depth on the server side or the storage side? Also, what is the recommended level for VMAX storage? I would appreciate your reply.