Dell Storage Forum 2012: this is London calling

*The following blog post from Simon Robinson is sponsored by Dell.*

I’m delighted to have been asked by Dell to share a few thoughts about storage market trends and how we view Dell’s evolving role within them. Indeed, this is a good time to do so as we begin a new year and head into Dell’s Storage Forum event in London. This is Dell’s first Storage Forum event outside the USA, and as a Brit I am happy to share some perspectives from this side of the pond.

The big theme at Storage Forum is “Fluid by Design.” Those of us who have been following the space for some time are well aware that the storage industry has developed many of the enabling technologies that underpin the cool devices, applications and platforms that are now driving the digital economy. I believe that we are entering perhaps the most exciting period – in terms of disruptive innovation – in the more than ten years that I have been covering the storage space.

I think the notion of ‘fluidity’ in this context is quite apt. The trend that’s driving this latest wave of storage innovation is blindingly obvious: data, lots and lots (and lots) of it. Many of us now won’t leave home without at least one ‘data generating’ device – tablet computer, smartphone or high-definition camera – and we all know the data volumes these devices can generate are colossal. And the beauty of the Cloud means that it’s oh-so-easy to ‘fluidly’ store this data online and share it across multiple devices, as well as with friends and relatives worldwide.

However, although the ease with which we can share data via the Internet has revolutionised the way we communicate, it has helped to create a perception that the storage of all this data is somehow ‘free.’

Those who work in the trenches of IT know that this is not the case; far from it. A multi-terabyte hard drive is, incredibly, about the same price as a toaster, but those at the coalface know that plugging several of those drives into a storage array in a way that meets the demands of modern enterprise applications and users, while also delivering the required levels of availability and protection, is far from straightforward, or cheap. Then there’s the additional burden of ensuring data storage meets regulatory and governance requirements around retention and preservation. For most organisations, simply deleting data is not an option.

In short, the storage capacity boom that is fuelling the digital revolution is causing businesses some real challenges that we believe are beginning to stretch the capabilities of legacy storage technologies.

And at a time of flat-to-declining IT budgets – not to mention the broader economic uncertainty that we are feeling, especially in Europe, right now – rampant levels of data growth are placing disproportionate pressure on the storage infrastructure.

The age-old response of simply applying a band-aid through adding more capacity is often no longer economically viable. Adding further load to the storage system is the fact that virtualization often exacerbates the problem; according to our research, server virtualization and consolidation projects are by some distance the number one driver of storage capacity growth. Server sprawl may have caused headaches for server admins in the pre-virtualization days, but virtual server sprawl is very much a serious issue for storage administrators today.

As a result, IT and storage professionals tell us that their attention in 2012 is going to be concentrated on improving storage utilization and overall efficiency. Rampant data growth is a given; with the usual palliatives out of bounds because of flat budgets, many IT managers are open to a different approach. What’s more, if IT is looking to gain maximum value out of their virtual infrastructure – and all of the fluidity and dynamism that this entails – by virtualizing their more business- and mission-critical applications, then a thorough appraisal of the storage environment is an essential part of the process.

All of which brings me to Dell.  The company may not traditionally be synonymous with enterprise storage, but we believe this perception is changing. Dell has invested a lot of M&A and organic R&D dollars in building a broad, next-generation storage portfolio that it calls Fluid Data Architecture.

Dell’s vision here is of an intelligent, efficient storage infrastructure that is capable of storing, managing and protecting all variants of enterprise data – be it file-, block- or even object-based – based on the changing individual requirements of that data over its lifecycle. Fluid also means being able to elegantly scale over time, reducing the burden of costly and disruptive upgrade cycles and minimising operational overhead, which is now often the greatest cost in the data center. One final but important component of this vision is that it aims to consistently apply both the same user experience and intelligence throughout the infrastructure. This strives to ensure, for example, that deduplicated data doesn’t have to be ‘reinflated’ as it moves between multiple primary, backup and archive systems, or that the same management console can be used for managing multiple systems.

Though this is a multi-year vision, Dell continues to deliver against it. Just this week at Storage Forum, Dell announced the new Dell DR4000 disk-to-disk backup solution which addresses another real and pressing customer challenge. Data deduplication technologies have been popular for a number of years as IT shops move away from tape and towards disk-based backup, and our research indicates that backup-centric dedupe continues to be one of the hottest technologies in storage. It’s good to see Dell enter this market with a system based on its own IP.
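For readers less familiar with how backup-centric deduplication achieves its savings, the core idea can be sketched in a few lines. This is a deliberately minimal illustration of fixed-size, hash-based chunking – not Dell’s implementation, and real products typically use variable-size chunking and far more sophisticated indexing – but it shows why repeated backups of largely unchanged data consume so little extra disk:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; production systems often use variable-size chunking

def dedupe(data: bytes, store: dict) -> list:
    """Split data into chunks, keep only unseen chunks in the store,
    and return the list of chunk fingerprints (the 'recipe' for the data)."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # new content: store it exactly once
            store[digest] = chunk
        recipe.append(digest)        # duplicate chunks cost only a fingerprint
    return recipe

def rehydrate(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk fingerprints."""
    return b"".join(store[d] for d in recipe)

# Two nightly backups that share most of their content
store = {}
backup1 = b"A" * 8192 + b"B" * 4096
backup2 = b"A" * 8192 + b"C" * 4096  # only the final chunk differs

r1 = dedupe(backup1, store)
r2 = dedupe(backup2, store)
assert rehydrate(r1, store) == backup1
assert rehydrate(r2, store) == backup2
# 24 KB of logical backup data is held as just 3 unique 4 KB chunks
```

The second backup adds only one new chunk to the store, which is exactly why disk-based backup with dedupe can be economically viable where raw disk would not be.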

Also, Dell announced a major new upgrade to the Compellent Storage Center. The new Dell Compellent Storage Center 6.0 firmware is now built on a 64-bit OS, essentially building in the headroom for the next generation of Intel processors that promise to boost performance and improve TCO. Version 6.0 also sees improved support for VMware’s vSphere Storage APIs for Array Integration (VAAI) and VMware’s Site Recovery Manager (SRM), both of which are designed to help those customers looking to gain more value from their virtual infrastructure investments. Compellent storage systems already provide much of the intelligence that I’ve been talking about – don’t forget these folks pioneered automated tiering, an essential technology when implementing SSDs – so the latest upgrades provide a strong foundation for future capabilities.

Last, but not least, the company also announced the new Dell Solution for Microsoft SharePoint Infrastructure Optimization. Much like virtual servers, SharePoint deployments have a tendency to proliferate like the proverbial rabbits, to the extent that the costs of managing a SharePoint farm can quickly spiral out of control, especially on the storage side. The new offering, which bundles Dell’s DX object storage system with Dell/Microsoft services, SharePoint and AvePoint’s DocAve Software Platform, should appeal to organizations looking to wrest back control of their SharePoint farms.

These announcements help to underscore that Dell is determined to be a significant player in the storage market. While it remains one of the most competitive sectors within IT – which is why I so love covering this space – the growing challenges associated with data growth, increased demands from users and applications, and tight budgets are opening up opportunities for companies with a different approach. In this respect we believe Dell’s evolving Fluid Data strategy merits attention.


Simon Robinson is Research Director for the Storage & Information Management practice at 451 Research, an industry analyst and data firm that specializes in emerging technologies in enterprise IT. For more details visit

About the Author: Simon Robinson