emarzock1
3 Argentium

How do we fix forced flushing?

I have seen other discussions on this topic, but none that specifically answers the question: "If I see forced flushing on LUNs in a RAID group, how many more disks must I add to eliminate the situation?"

We are seeing significant forced flushing on a LUN. During the times when the LUN is experiencing forced flushes, I see I/O queue depths of up to 9 or 10 across the 8 drives in this RAID 5 group.

So, does that mean that 10 outstanding I/Os x 8 drives = 80 I/Os are outstanding against the group? And would a single disk added to the RAID group be sufficient to eliminate the forced flushes?
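
For reference, here's the back-of-the-envelope arithmetic I've been trying to sanity-check. The per-spindle rating and the read/write mix below are rough assumptions rather than our measured numbers; the real values would come from Analyzer:

```python
import math

# All inputs are rough assumptions for a sanity check, not measured values.
DISK_IOPS = 150        # ballpark rating for one 15k FC spindle
READ_IOPS = 800        # assumed front-end read rate (would come from Analyzer)
WRITE_IOPS = 600       # assumed front-end write rate (would come from Analyzer)
R5_WRITE_PENALTY = 4   # RAID 5 small write = 2 reads + 2 writes on the back end

# Back-end load the RAID group actually has to absorb.
backend_iops = READ_IOPS + WRITE_IOPS * R5_WRITE_PENALTY   # 800 + 2400 = 3200

# Spindles needed to sustain that load, versus the 8 we have today.
disks_needed = math.ceil(backend_iops / DISK_IOPS)
print(f"back-end IOPS: {backend_iops}, spindles needed: {disks_needed}, have: 8")
```

If numbers anything like these are right, one extra disk wouldn't come close, which is why I'm asking.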

Thanks for any insights.

Eric

inquiringminds
1 Copper

Re: How do we fix forced flushing?

There are many things you can do to fix forced flushing: lower the watermarks, add more spindles, upgrade SP hardware for more cache, and so on.  Rather than go into the details of each, please see KB article Primus emc186107 (viewable via Powerlink), which explains each of these options and others in further detail, with step-by-step instructions for making the adjustments.
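
As a quick illustration of the watermark option before you dig into the Primus article: lowering the high watermark makes the SP start flushing earlier, which leaves more free write cache to absorb a burst before forced flushing kicks in. The cache size and watermark values in this sketch are example numbers only, not recommendations:

```python
# Burst headroom left above the high watermark before the write cache fills
# and the SP falls back to forced flushing.
# The 2048 MB write cache and the watermark values are example numbers only.

def burst_headroom_mb(write_cache_mb, high_watermark_pct):
    return write_cache_mb * (1 - high_watermark_pct / 100)

for hw_pct in (80, 60, 40):
    print(f"high watermark {hw_pct}%: {burst_headroom_mb(2048, hw_pct):.0f} MB of headroom")
```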

kelleg
5 Tungsten

Re: How do we fix forced flushing?

Eric,

In addition to the Primus article referenced in the other reply, please see emc218359 - an index of all the Primus articles about using Analyzer and the types of performance issues you may encounter.

glen

driskollt1
3 Argentium

Re: How do we fix forced flushing?

We have a lot of UNIX servers that love to overload our CLARiiONs' cache...

Here are some options.  Which one is best really depends on your requirements.

1.  Move to RAID 10 - Only six more disks will get you the same usable capacity with a lower write penalty (two back-end I/Os per host write instead of RAID 5's four; see the sketch after this list).  This is probably the least expensive option.

2.  Add more spindles to the current R5 RAID Group (or migrate to a MetaLUN that spans multiple RAID Groups) - You'll end up with more spindles, but the R10 will probably serve you better.

3.  Buy and install Navisphere QoS Manager (NQM) - Limit the number of IOPS that can go to the RAID Group.

4.  Disable write cache for the problem LUN - Performance hit on that LUN, but if you don't care about performance on this particular LUN it might be a good idea.

5.  Change your cache watermarks - This affects the whole array, and if you've severely underprovisioned your spindle count for the problem LUN it still might not fix your issue.  I've had Sun M500s able to keep the cache at 99% even with a high watermark of 30%.
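
To put some rough numbers behind options 1 and 2, here's a quick sketch of the write-penalty math. The 150 IOPS spindle rating and the 800/600 read/write mix are made-up illustration figures, not measurements from any array:

```python
# Back-end disk load for the same host workload on RAID 5 vs. RAID 10.
# The spindle rating and workload mix are illustrative assumptions only.
DISK_IOPS = 150               # ballpark 15k FC spindle rating
reads, writes = 800, 600      # assumed host IOPS mix

def backend_load(reads, writes, write_penalty):
    # Reads cost one disk I/O each; each host write costs write_penalty disk I/Os.
    return reads + writes * write_penalty

r5 = backend_load(reads, writes, 4)     # RAID 5: read-modify-write = 4 disk I/Os
r10 = backend_load(reads, writes, 2)    # RAID 10: mirrored write = 2 disk I/Os

print(f"RAID 5, 8 disks:   {r5} back-end IOPS vs {8 * DISK_IOPS} sustainable")
print(f"RAID 10, 14 disks: {r10} back-end IOPS vs {14 * DISK_IOPS} sustainable")
```

With a mix like that, the 8-disk R5 group is buried while a 14-disk R10 group roughly keeps up, which is why the write penalty matters more than the raw spindle count.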

Also make sure you have enough write cache allocated.  Don't allocate a bunch to read cache if you don't need it.  On my CX4-960s I usually set between 256 and 1024 MB of read cache (how much depends on the kind of load the CLARiiON carries) and the rest goes to write cache.  If you're using more than about 10% of your cache for read cache, you might want to reallocate.
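
If you want a quick sanity check on your own split, the arithmetic is just this (the 8192 MB total and 512 MB read cache are example figures, not CX4-960 specs):

```python
# Check a read/write cache split against the ~10% read-cache rule of thumb.
# The totals below are example figures only, not CX4-960 specs.

def split_cache(total_mb, read_mb=512):
    write_mb = total_mb - read_mb        # everything else goes to write cache
    read_pct = 100 * read_mb / total_mb
    if read_pct > 10:
        print(f"read cache is {read_pct:.0f}% of total - consider reallocating")
    return read_mb, write_mb

print(split_cache(8192))   # (512, 7680): read cache is about 6% of the total
```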
