Actually, I'd argue that is correct, because after that deletion the dedupe rate for the other volume is no longer very good.
I agree, and I understand that something like a dedupe rate is always going to fluctuate, but having some way to see how well a particular volume is compressing over time would be helpful. For instance, if you have a 6TB volume with a 0x dedupe rate that does not require a lot of IO, then you are wasting some very expensive storage.
That is the kind of situation it would be nice to detect, especially when you have volumes from virtual servers or other sources that may not map directly to a single application, and you potentially have a lot of these devices.
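As a rough illustration of the kind of report I mean, here is a minimal sketch that flags volumes whose savings ratio is too low to justify keeping them on a deduplicating array. The volume names, sizes, and the 1.2:1 threshold are all made-up assumptions, and in practice the logical/physical sizes would come from whatever reporting your array's CLI or API exposes:

```python
# Hypothetical per-volume savings report.
# Each entry maps a volume name to (logical_tb, physical_tb) as the
# array might report them; these numbers are invented for illustration.
volumes = {
    "vm_datastore_01": (6.0, 5.9),  # barely dedupes at all
    "db_backup_01": (6.0, 1.5),     # dedupes well
}

# Assumed cutoff: below this ratio, the volume may belong on cheaper storage.
MIN_RATIO = 1.2

for name, (logical, physical) in sorted(volumes.items()):
    ratio = logical / physical
    flag = "  <-- candidate to migrate" if ratio < MIN_RATIO else ""
    print(f"{name}: {ratio:.2f}:1{flag}")
```

Run periodically and trended over time, even something this simple would surface the 6TB-volume-with-no-savings case without having to map every device back to an application by hand.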
The context of my question is that we are trying to see whether we can cost-justify some of these units based on compression, so we have an interest in understanding what compression rates we are getting from different environments.