
December 15th, 2014 02:00

Write Pending too high

During a database restore activity we found our cache utilization went very high, to almost 98%.
Later we were able to sort it out by implementing a host I/O limit on the server.
Now we have noticed one of the LUNs having a very high Write Pending count.
[Attached screenshot: Capture1.JPG]


Notice the highlighted LUN 0892: it is bound to a SATA pool, is doing only 63.2 IOPS, and yet its Write Pending count goes up to 2.8 lakh (about 280,000).
Can anyone explain what actually happened here?

How can 63 IOPS lead to 2.8 lakh WP?

1.3K Posts

December 16th, 2014 09:00

Too much FE write workload for the BE resources. Increase the BE capability to destage (add disks, add DAs, change the RAID level, use faster disks, etc.).

If it is the SATA pool that is overloaded, you can use DCP (Dynamic Cache Partitioning) to fence the writes to that pool, so other writes going to EFD or FC don't suffer as well.

You could also apply host IO limits if you know which SGs are overwhelming the back end.
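To put rough numbers on the FE/BE imbalance: WP tracks accumulate at (FE write rate minus BE destage rate). Here is a minimal back-of-the-envelope sketch in Python; the destage rate and one-track-per-write figures are assumptions for illustration, not measured values:

# Rough model of WP accumulation: dirty tracks pile up whenever the
# front end writes faster than the back end can destage them.
fe_write_iops = 63         # front-end writes per second (from the screenshot)
tracks_per_write = 1       # ASSUMED: each write dirties one 64 KB track
be_destage_per_sec = 0     # ASSUMED: the SATA back end destages ~nothing for this device

wp_target = 280_000        # ~2.8 lakh WP tracks observed

net_growth = fe_write_iops * tracks_per_write - be_destage_per_sec
seconds = wp_target / net_growth
print(f"Time to reach {wp_target} WP tracks: {seconds / 60:.0f} minutes")
# -> ~74 minutes of sustained writes with near-zero destage

In other words, even a low write rate can build a huge WP backlog if it is sustained and the back end destages almost nothing.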

32 Posts

December 16th, 2014 06:00

Cache Size (Mirrored)                :  240640 (MB)

# of Available Cache Slots           : 3282344

Max # of System Write Pending Slots  : 2461758

Max # of DA Write Pending Slots      :       0

Max # of Device Write Pending Slots  :  123087

Replication Cache Usage (Percent)    :       0

Max # of Replication Cache Slots     :  461579


I understand that, but check the IOs on that device: it's just 63. Then check the WP for that device... it doesn't seem to add up.
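For what it's worth, these symcfg numbers are internally consistent with the commonly cited Enginuity defaults (system WP limit of 75% of cache slots; device WP limit of 5% of the system WP limit). A quick sanity check, where only the percentages and the 64 KB track size are assumptions:

# Sanity-check the symcfg list -v output against the commonly
# documented defaults (75% system WP limit, 5% per-device limit).
cache_mb = 240_640
total_slots = 3_282_344
system_wp = 2_461_758
device_wp = 123_087

print(total_slots * 0.75)             # 2461758.0 -> matches the system WP limit
print(system_wp * 0.05)               # 123087.9  -> matches the device WP limit
print(cache_mb * 1024 / total_slots)  # ~75 KB per slot, consistent with a
                                      # 64 KB track plus metadata (assumed layout)

So the per-device ceiling really is 123,087 slots, which is why the meta-volume question below matters.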

1.3K Posts

December 16th, 2014 06:00

Looks like a lot of other devices had high WP counts as well. How many slots can be WP, per device and per system? You can get this from symcfg list -v.

Generally, high WP counts occur because the back end cannot keep up with front-end writes.

1.3K Posts

December 16th, 2014 08:00

Is that device a meta volume?  The count (123,087) is per member.

1.3K Posts

December 16th, 2014 08:00

So that is only about 23,000 per member.

I don't see the IO size; maybe they are 1 MB IOs, which might lead to more WP tracks, or maybe the IO rate was higher before, leading to the high WP counts.

Again, if you have high WP counts, the BE can't keep up with the FE, but other devices could be contributing to the BE issue too.
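To put both possibilities into numbers (the 1 MB IO size is purely an assumed example; only the WP count and member count come from this thread):

# Two effects that make 63 IOPS and ~280,000 WP tracks less surprising.
wp_tracks = 280_000
meta_members = 12
print(wp_tracks / meta_members)   # ~23,333 WP per member, well under the
                                  # 123,087 per-member device limit

# A large IO dirties multiple 64 KB tracks:
io_size_kb = 1024                 # ASSUMED: 1 MB writes
track_kb = 64
tracks_per_io = io_size_kb // track_kb   # 16 tracks per write
print(63 * tracks_per_io)         # ~1,008 WP tracks generated per second
# At that rate, 280,000 tracks accumulate in under 5 minutes if the
# SATA back end destages almost nothing.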

32 Posts

December 16th, 2014 08:00

Yes, it's a 12-member meta device.
Still, WP going that high for that device with just 63 IOs doesn't make sense.

WP can go high only in the case of intensive IOs, writes to be precise. How is it that just 63 IOPS are leading to a WP of almost 2.8 lakh?

32 Posts

December 16th, 2014 09:00

So is it a case of queuing at the BE?
How do we rectify it?
We had a serious issue where the cache utilization went up to 99%.

December 22nd, 2014 23:00

Have you created and enabled a DSE pool in your environment? A DSE pool helps in bringing down the WP level.

859 Posts

December 23rd, 2014 01:00

A DSE pool is helpful in the case of a not-well-planned SRDF setup; it does not help in every high-WP case.

Regards,

Saurabh Rohilla
