Many organizations develop their infrastructure vision from a fresh start, basing it on current technologies or vendor recommendations. As time goes by and the storage architecture is stressed, these organizations discover performance constraints that were not evident earlier. Some begin to wonder whether they selected the correct storage platform.
As organizations move toward Big Data, virtualization, or cloud computing, pressure to control budgets and project costs can undermine the innovation that was originally promised. Add a new strategic project that has just become a priority, and suddenly you find yourself stuck from all angles: the business believes it gave you everything you needed and limits changes to scope and budget, senior leaders think you didn’t plan well, and procurement processes, as usual, are time consuming. Meanwhile, the IT architecture deteriorates at a much faster pace, cannot scale non-disruptively, and you are left struggling to catch up.
Dwindling IT budgets complicate matters further, leading to a predictable question: “Why did we not foresee this problem, and where did our predictions go wrong?”
In this Knowledge Sharing article, Anil C. Sedha and Tommy Trogden explore many of the challenges faced by storage engineers and offer practical tips for improved design and architecture that meets today’s requirements. They discuss topics such as performance criteria, capacity vs. performance trade-offs, cost impacts, future-proof architecture, data compression, data replication and protection, cloud computing strategies, and more.