
Analyzer advanced characteristics on pool LUNs

There are several characteristics that aren't available for pool LUNs, as opposed to LUNs carved from standard RAID groups; Forced Flushes and Full Stripe Writes are examples. I'm not sure why these wouldn't apply to LUNs from a pool, since there are still underlying RAID groups, and I would expect something like a full stripe write to be possible. Can someone please explain? Thanks.


Re: Analyzer advanced characteristics on pool LUNs

Pools use 1 GB slices, each of which maps to a private LUN in a private RAID group. There is no mechanism to show how a host-facing pool LUN maps to specific slices on specific private elements within the pool. The premise is that every host-facing pool LUN will be composed of slices distributed as evenly as possible across the pool (we know rebalancing does not yet occur when expanding the pool; check out the roadmap for that feature coming).

The statistics you refer to are specifically available using an undocumented switch with Analyzer archivedump to CSV format. This will be covered in some detail in the forthcoming EMC World Analyzer Hands-On Lab, which should be available to USPEED folks to help accounts with inquiring minds after the event.

So what is valuable here? Well, Forced Flushing within the pool is worth looking at, especially if write cache is showing saturation; if you have multiple pools set up, knowing which one is contributing most to that would be useful. As previously mentioned, though, specific host LUN to private RAID group mapping isn't available, so if you identified a specific pool with more Forced Flushing than the others, and host LUN(s) in that pool with high write rates, you could initially turn off write caching for that pool or look to migrate the heavy writers to other resources.
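Once the statistics are exported to CSV, totalling a forced-flush counter per pool is a short standard-library exercise. A minimal sketch, assuming hypothetical column headers ("Pool Name", "Forced Flushes/s") rather than the actual archivedump output, so match them to whatever your export really contains:

```python
import csv
import io
from collections import defaultdict

def forced_flushes_by_pool(csv_text):
    """Sum a forced-flush counter per pool from Analyzer CSV rows.

    The headers used here ("Pool Name", "Forced Flushes/s") are
    hypothetical stand-ins for the real archivedump column names.
    """
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["Pool Name"]] += float(row["Forced Flushes/s"])
    return dict(totals)

# Usage with made-up sample rows:
sample = """Pool Name,Forced Flushes/s
Pool 0,12.5
Pool 1,0.4
Pool 0,9.1
"""
per_pool = forced_flushes_by_pool(sample)
```

The pool with the largest total is the first place to look when write cache saturation shows up.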

Other statistics for private RAID groups and private LUNs would be useful for creating charts of per-tier IOPS, which could also reveal potential hot spots in the pool caused by high skew (locality of the workload within a few GB of LBA space), or even faulty drives running slow.
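A sketch of the kind of per-tier rollup described above, again with hypothetical CSV headers ("Tier", "Object Name", "Total Throughput (IO/s)"): it totals IOPS per tier and records the busiest private object in each tier, which is one crude way to spot a hot slice or a struggling drive.

```python
import csv
import io
from collections import defaultdict

def tier_hotspots(csv_text):
    """Per-tier IOPS totals plus the busiest private object in each tier.

    Column names are assumptions for illustration, not the real
    archivedump headers.
    """
    totals = defaultdict(float)
    busiest = {}  # tier -> (object name, IOPS)
    for row in csv.DictReader(io.StringIO(csv_text)):
        tier = row["Tier"]
        iops = float(row["Total Throughput (IO/s)"])
        totals[tier] += iops
        if iops > busiest.get(tier, ("", 0.0))[1]:
            busiest[tier] = (row["Object Name"], iops)
    return dict(totals), busiest
```

Feeding the totals into any charting tool gives the per-tier IOPS view; the `busiest` map points at where the skew is concentrated.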

I expect enhancements to the Analyzer interface and/or documentation to cover pool analysis in greater depth as a future deliverable, so I suggest checking the roadmap periodically and/or keeping close to your USPEED contacts, as they'll know soon after we do.

In the meantime, you can look at disk statistics in the regular Analyzer UI to see the distribution of IO; that can give you an idea of how evenly IO is being handled across drives within each tier of a pool. But agreed, pool private LUN/RAID group cache statistics aren't visible.
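As a rough way to quantify how balanced that per-drive IO is, you could feed the disk IOPS you read off the Analyzer UI into something like this. A sketch only; the idea of flagging a high max-to-mean ratio is a rule of thumb, not an EMC guideline:

```python
def drive_balance(disk_iops):
    """Summarize IO balance across the drives of one tier.

    disk_iops: mapping of drive name -> observed IOPS (e.g. values
    read off the regular Analyzer UI disk statistics).
    Returns (mean, max, imbalance ratio). A ratio well above 1.0
    suggests a hot spot, high skew, or a slow drive in the tier.
    """
    values = list(disk_iops.values())
    mean = sum(values) / len(values)
    worst = max(values)
    ratio = worst / mean if mean else float("inf")
    return mean, worst, ratio

# Usage with made-up per-drive numbers for one tier:
mean, worst, ratio = drive_balance({"0_0_4": 100.0, "0_0_5": 100.0, "0_0_6": 200.0})
```

An evenly loaded tier should sit close to a ratio of 1.0 once the workload has settled.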
