
April 16th, 2013 12:00

Is it possible to go too wide for backend striping and disk groups?


I currently have 2 disk groups of 460 300 GB FC drives each. I split each disk group into TDATs and combined them into two separate pools, so essentially each pool is backed by 460 drives. Would it have been better to create 4 pools using only 230 drives each on the backend? Is there a point where the number of drives behind a pool no longer counts one-for-one when you are figuring the total IOPS the pool can deliver?
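
For context, the back-of-the-envelope math I have been using is below. It is just a sketch; the ~180 IOPS per 15K FC spindle, the 70/30 read/write mix, and the RAID-1 write penalty are my assumptions, not measured numbers.

```python
# Back-of-the-envelope spindle math for a thin pool.
# Assumptions (mine, not measured): ~180 IOPS per 15K FC drive,
# a 70/30 read/write mix, and a RAID-1 write penalty of 2.

def pool_iops(drives, iops_per_drive=180, read_pct=0.7, write_penalty=2):
    """Approximate host IOPS the pool's spindles can absorb."""
    backend_iops = drives * iops_per_drive
    # Each host read costs one back-end I/O; each host write costs write_penalty.
    cost_per_host_io = read_pct + (1 - read_pct) * write_penalty
    return backend_iops / cost_per_host_io

# Two 460-drive pools vs. four 230-drive pools over the same 920 spindles:
print(f"{pool_iops(460):,.0f} host IOPS per 460-drive pool")
print(f"{pool_iops(230):,.0f} host IOPS per 230-drive pool (four of them = same total)")
```

On paper two 460-drive pools and four 230-drive pools over the same 920 spindles come out identical, so what I am really asking is whether that linear math breaks down somewhere on the back end.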

50 Posts

April 22nd, 2013 08:00

We have been told to go wide. Our FC tier currently has 542 10K 600 GB drives in R1, and we are about to add more. I'd be interested to hear whether there is a bottleneck from going "too wide".

50 Posts

April 22nd, 2013 08:00

How many FEs are you using? If you're able to use more FEs for your hosts, have you tried that? We give each host 8 paths (one to each director in our 4-engine config).

14 Posts

April 22nd, 2013 08:00

Two :)  You are correct, by the way; adding more did help. On our VMAX configurations (4-engine) we give a 4-port FE port group, one per engine. The problem app in this discussion is dedicated to one of our older DMX-4s and was given a single FA pair. Last week we added a second pair, so the host now has 4 paths (write response times went from 700 ms to about 150 ms). Now for the disks: they are a little all over the place, ranging from 14% to 22% busy. The first 200 drives or so are between 20% and 22%, then it drops off a little, and more so toward the last 100 drives. As the app pushes more I/O I will keep an eye on these numbers. On the smaller drive pools on our VMAX arrays I see a little more consistency, which is what prompted my original question.
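
For what it's worth, this is roughly how I am eyeballing that spread; the %busy values in the list are made up for illustration, the real ones come out of our performance stats.

```python
# Quick evenness check across the pool's drives.
# busy_pct is made up for illustration; the real values come from per-drive stats.
import statistics

busy_pct = [22, 21, 21, 20, 18, 17, 15, 14]

print(f"min {min(busy_pct)}%  max {max(busy_pct)}%  "
      f"mean {statistics.mean(busy_pct):.1f}%  "
      f"spread {max(busy_pct) - min(busy_pct)}%  "
      f"stdev {statistics.pstdev(busy_pct):.1f}%")
# A wide spread or a fat stdev says the load isn't landing as evenly
# as the drive count would suggest.
```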

14 Posts

April 22nd, 2013 08:00

So far the only bottleneck we have hit is on the FEs, because the provisioner decided to use 192 small (30 GB) thin devices for an application that is doing 95% writes (small I/Os, 1,000-2,000 IOPS per server). Our FEs are hitting 80%+ before our back-end disks even get past 14% utilized. Right now we are trying to increase throughput. It does appear that all the drives on the backend are performing equally, which is a good sign. I will keep you posted; right now we have lots of small I/Os and not enough throughput to really hammer this configuration.
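
To put very rough numbers on why the FEs give out first: the sketch below compares front-end and back-end utilization, but the server count, per-port IOPS ceiling, and per-spindle figure are all assumptions for illustration, not measurements off this box.

```python
# Rough front-end vs. back-end utilization for a small-write workload.
# Every number here is an assumption for illustration only.

servers = 8                 # hypothetical server count
iops_per_server = 2000      # top of the 1,000-2,000 IOPS per server range
fe_ports = 2                # the single FA pair the host started with
port_ceiling = 10000        # assumed small-I/O IOPS one FA port can sustain
drives = 460                # spindles behind a wide pool
drive_iops = 180            # assumed IOPS per FC spindle
write_penalty = 2           # RAID-1: two back-end I/Os per host write

host_iops = servers * iops_per_server
fe_util = host_iops / (fe_ports * port_ceiling)
be_util = host_iops * write_penalty / (drives * drive_iops)

print(f"front end ~{fe_util:.0%} busy, back end ~{be_util:.0%} busy")
# With tiny I/Os the front-end ports run out of headroom long before
# hundreds of spindles do.
```

In practice the back end looks even quieter than that, since the writes land in cache and get coalesced before they are ever destaged to the spindles.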

50 Posts

April 22nd, 2013 10:00

Are these really small LUNs for this app? I just read this post, which you may find interesting, about how data is written to the disks in a thin pool:

how a thin striped meta is layed out on a thin pool
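
The gist, as I understand it, is that the pool hands out thin extents round-robin across the enabled data devices, so even a single small TDEV gets spread over a lot of spindles once it is written to. Here is a toy sketch of that allocation pattern; the extent size and TDAT count are placeholders rather than your pool's actual numbers.

```python
# Toy model of thin-pool allocation: extents are handed out round-robin
# across the pool's data devices (TDATs), so even one small TDEV ends up
# striped over many drives. The extent size and TDAT count are placeholders.
from collections import Counter

tdats = 48          # data devices enabled in the pool (placeholder)
extent_kb = 768     # thin extent size often quoted for VMAX (12 tracks)
tdev_gb = 30        # one of the small TDEVs from this app

extents = tdev_gb * 1024 * 1024 // extent_kb
placement = Counter(extent % tdats for extent in range(extents))

print(f"{extents} extents spread over {len(placement)} TDATs, "
      f"~{placement.most_common(1)[0][1]} extents on each")
```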

14 Posts

April 22nd, 2013 11:00


Yeah, I read that article a while back when I was chasing a performance issue on another topic. We are not using metas with this app, just a ton of single 30 GB TDEVs. I have seen response times vary from going either too small or too big when right-sizing devices. I wish EMC would publish better guidelines on how to size devices for particular workload I/O patterns.
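
In the absence of official guidance, the trade-off I end up sketching looks something like this; the capacity and IOPS targets are made up for illustration, and only the 30 GB size comes from our actual config.

```python
# Device-sizing trade-off: same capacity, same workload, different TDEV sizes.
# capacity_gb and workload_iops are made up; only the 30 GB size is real here.

capacity_gb = 5760          # hypothetical capacity the app needs
workload_iops = 30000       # hypothetical aggregate host IOPS

for tdev_gb in (30, 60, 120, 240):
    devices = -(-capacity_gb // tdev_gb)     # ceiling division
    print(f"{tdev_gb:>4} GB TDEVs: {devices:>4} devices, "
          f"~{workload_iops // devices:>5} IOPS landing on each")
# Fewer, larger devices pile more I/O and queueing onto each device;
# lots of small ones spread the queues but multiply what the hosts
# and FAs have to manage.
```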
