The Tracks of My Tiers

Ever since Tom Georgens mentioned on NetApp’s earnings call that “I think the entire concept of tiering is dying,” there have been a host of articles discussing what he meant by that and weighing in with dissent or agreement.

A number of great examples have been written, and more are linked from those blogs as well.

In addition, Chris Mellor recently wrote an article about Avere; their claim to fame is a scale-out caching layer, though they're leveraging automated tiering as well.

This is a hot topic, no doubt about that. I’ve spent the last 12+ months talking to a wide variety of our customers, and tiering (or, more accurately, maximizing performance and reducing cost while maintaining simplicity) is invariably top of mind.

I won’t speak to the effectiveness of block-based automated tiering strategies, nor enter the fray as to whether a pure caching approach is effective enough to capture every possible performance-oriented scenario. What customers have been communicating to me, in many different ways, is that there is a multitude of different environments out there, each demanding a slightly different approach to maximizing performance while reducing cost. The other key point customers have made is that they don’t want additional complexity in their environments. They don’t want to give up their enterprise features, their ability to support NFS and CIFS, or their ability to scale a single system to large amounts of performance and data without adding management overhead.

The ideal storage system will let customers optimize for their particular workflow on a per-application, per-LUN, per-file, and per-directory basis. Storage systems need to be flexible enough to provide not only automated options but also manual, workflow-specific options.

A great example of this (although slightly orthogonal) is our recent ability to put metadata purely on SSD and spill it over to disk as necessary. This is not something that can be accomplished with either caching or automated tiering, since the system cannot predict it a priori; rather, it is input that only the storage administrator and application architect can provide.
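To make the idea concrete, here is a minimal sketch in Python of what a metadata-pinning placement policy with spillover might look like. All names here are hypothetical illustrations, not a real storage API:

```python
from dataclasses import dataclass

@dataclass
class Pool:
    """A simplified storage pool with a fixed capacity in blocks."""
    name: str
    capacity: int
    used: int = 0

    def has_room(self, blocks: int) -> bool:
        return self.used + blocks <= self.capacity

    def allocate(self, blocks: int) -> None:
        self.used += blocks

def place(kind: str, blocks: int, ssd: Pool, hdd: Pool) -> str:
    """Pin metadata to SSD, spilling over to HDD only when the SSD
    pool is full. Data blocks go straight to HDD. Returns the pool name."""
    if kind == "metadata" and ssd.has_room(blocks):
        ssd.allocate(blocks)
        return ssd.name
    hdd.allocate(blocks)
    return hdd.name

# Example: a small SSD pool fills up, and metadata spills over to disk.
ssd = Pool("ssd", capacity=100)
hdd = Pool("hdd", capacity=10_000)
print(place("metadata", 80, ssd, hdd))  # -> "ssd"
print(place("metadata", 40, ssd, hdd))  # -> "hdd" (spillover)
print(place("data", 500, ssd, hdd))     # -> "hdd"
```

The point of the sketch is that the metadata/data distinction is declared up front by a person, not inferred from access patterns the way a cache or automated tier would infer it.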

There are a host of other potential options: file types that have specific access patterns, directories that contain virtual machines, LUNs owned by Exchange 2010, and so on. Clearly, any such system should minimize the work required to define and apply these policies, while still presenting a single file system, a single namespace, and a single point of management to users.
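One plausible shape for such policies is a first-match rule table. Here is a short Python sketch of that idea; the rule patterns, tier names, and the `app_owner` field are all made up for illustration:

```python
import fnmatch
from typing import Callable, NamedTuple

class FileInfo(NamedTuple):
    path: str        # full path within the single namespace
    app_owner: str   # application that owns the object, if known

class Rule(NamedTuple):
    matches: Callable[[FileInfo], bool]
    tier: str

# Hypothetical policy table: the first matching rule wins.
RULES = [
    # Virtual machine images live on the fast tier.
    Rule(lambda f: fnmatch.fnmatch(f.path, "/vmfs/*/*.vmdk"), "ssd"),
    # Objects owned by Exchange 2010 stay on the fast tier.
    Rule(lambda f: f.app_owner == "exchange2010", "ssd"),
    # Sequentially streamed media can live on capacity disk.
    Rule(lambda f: f.path.endswith((".mp4", ".iso")), "nearline"),
]

def choose_tier(f: FileInfo, default: str = "sas") -> str:
    """Return the tier for a file: first matching rule, else the default."""
    for rule in RULES:
        if rule.matches(f):
            return rule.tier
    return default

print(choose_tier(FileInfo("/vmfs/web01/web01.vmdk", "")))     # -> "ssd"
print(choose_tier(FileInfo("/mail/db1.edb", "exchange2010")))  # -> "ssd"
print(choose_tier(FileInfo("/media/promo.mp4", "")))           # -> "nearline"
print(choose_tier(FileInfo("/home/alice/report.doc", "")))     # -> "sas"
```

The administrator expresses intent once, per file type or per application, and everything else falls through to a sensible default, which is exactly the "minimize the work required" property described above.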

Is that tiering? I’m not sure that it is. That might just be the definition of a next-generation filesystem.

About the Author: Nick Kirsch