This is a blog excerpt from the May 4th Managed View blog: http://managedview.emc.com/2012/05/transforming-it-is-not-complete-without-application-protection-se...
Have a read. What do you think?
Why is it that every cloud conversation seems to revolve around turning some function into a service?
Well, the whole notion of cloud turns past concepts of IT as a back office function inside out and puts the end users at the forefront and rightfully so. It’s turned into somewhat of a mantra, but when you consider that the concept of running IT–as-a-service represents a fundamental shift in how we now think about traditional IT strategy and operations, it all makes sense.
Run like a company in itself, the goal of the IT department now is to offer an agile set of IT services that offer an alternative to external cloud-based service providers while providing more control and trust.
But, a key factor often missing from the discussion is application protection and how to offer it up as a service. We can debate about who has the better snapshot, backup, or remote replication technology, but that misses the point. The technical merits of the underlying infrastructure components aren’t all that relevant; it’s all about management.
Getting it Right
How do I implement application protection as a service?
In general, mapping out a data protection plan—sometimes called a disaster recovery (DR) plan—requires a deep understanding of the business value of the applications, the replication technology, and the implementation costs. It all comes down to balancing business requirements and cost. Because, at the end of the day, whatever the technology deployed, it’s only as effective as the worst-case scenario it can serve.
Looking at the illustration above, on the left we have recovery-point objective (RPO) which is how much data you could theoretically lose in the event of an outage. On the right we have recovery-time objective (RTO) which is the time it takes to restart your environment after an outage. The blue slopes indicate cost. The green slopes indicate where along the RPO/RTO spectrum the technology fits.
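To make the two metrics concrete, here is a minimal sketch of how RPO and RTO would be measured against an actual outage. All timestamps and names are hypothetical, chosen purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical outage timeline (all values are illustrative assumptions)
last_good_copy   = datetime(2012, 5, 4, 8, 0)   # most recent snapshot/replica
outage_start     = datetime(2012, 5, 4, 9, 30)  # the failure occurs
service_restored = datetime(2012, 5, 4, 11, 0)  # environment back online

# RPO measures backwards from the outage: how much data could be lost
actual_data_loss_window = outage_start - last_good_copy

# RTO measures forwards from the outage: how long until service resumes
actual_recovery_time = service_restored - outage_start

print(actual_data_loss_window)  # 1:30:00 -> must stay within the tier's RPO
print(actual_recovery_time)     # 1:30:00 -> must stay within the tier's RTO
```

The protection technology you buy is essentially a guarantee that these two measured values stay under the objectives you committed to.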
Not all applications require zero data loss technologies. As pointed out in a previous post, regardless of technology innovations, the protection or DR strategy is still about recovery time. If money were no object and moving data across distance was not limited by the speed of light, everything would be protected with fully synchronous real-time replication. But, as you can see in the illustration, this approach is expensive and distance-constrained. Therefore, in common practice, only the most critical data is protected with zero data loss technologies.
When moving to service catalogs, organizations often classify groups of applications into multiple tiers or classes (e.g. gold, silver, bronze or class 1, 2, 3). They prioritize business processes and associated data according to their criticality to the organization and their importance in resuming operations in the event of an unplanned outage. Then, each tier is associated with a specific protection technology in a service catalog.
In its simplest form, a service catalog might associate platinum-level service with synchronous replication, gold-level with asynchronous replication, silver with snapshots at 2-hour intervals, and bronze with snapshots at 8-hour intervals.
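The simple catalog above can be sketched as a lookup table mapping tiers to protection technologies. The tier names follow the example in the text; the RPO/RTO figures attached to each tier are illustrative assumptions, not anyone's actual service levels:

```python
# Minimal sketch of a protection service catalog.
# RPO/RTO hours here are assumed values for illustration only.
CATALOG = {
    "platinum": {"technology": "synchronous replication",  "rpo_hours": 0.0, "rto_hours": 1.0},
    "gold":     {"technology": "asynchronous replication", "rpo_hours": 0.5, "rto_hours": 2.0},
    "silver":   {"technology": "snapshots every 2 hours",  "rpo_hours": 2.0, "rto_hours": 8.0},
    "bronze":   {"technology": "snapshots every 8 hours",  "rpo_hours": 8.0, "rto_hours": 24.0},
}

def protection_for(tier: str) -> dict:
    """Look up the protection service an application tier is entitled to."""
    return CATALOG[tier]

print(protection_for("silver")["technology"])  # snapshots every 2 hours
```

Once applications are classified, the catalog (not per-application negotiation) drives which replication or snapshot technology gets provisioned, which is what makes the model automatable.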
At the top of this stack is the most expensive tier of service. The lower we go, the lower the cost goes at a price of an increased potential for data loss and increased recovery times. Application classifications (platinum versus gold versus silver versus bronze) depend entirely on each individual customer. I know that one very large retailer in the US considers Microsoft Exchange to be mission-critical due to the amount of sales/order workflow built into Microsoft Outlook and Exchange. For some other organizations, silver-level of service for Exchange might be good enough.
Conversely, a more complex service catalog may encompass detailed attributes, schemes, and specifications. Attributes might include performance and availability as well as recovery-point and recovery-time metrics, plus a whole host of other qualities such as retention periods (for backups) and security levels. These attributes are defined for each tier (e.g. class 1, 2, 3, and so on).
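A richer catalog entry of this kind might be modeled as a typed record carrying all of those attributes per class. Every attribute and value below is an assumed example of what such a class definition could hold:

```python
from dataclasses import dataclass

# Sketch of a detailed service-class definition; all attributes and
# values are illustrative assumptions, not a real catalog.
@dataclass(frozen=True)
class ServiceClass:
    name: str
    rpo_hours: float           # maximum tolerable data loss
    rto_hours: float           # maximum tolerable downtime
    availability_pct: float    # e.g. 99.99
    backup_retention_days: int # how long backups are kept
    security_level: str        # e.g. "encrypted-at-rest"

CLASS_1 = ServiceClass("class 1", 0.0, 1.0, 99.99, 90, "encrypted-at-rest")
CLASS_2 = ServiceClass("class 2", 2.0, 8.0, 99.9, 30, "standard")
```

Expressing each tier as a self-contained definition like this is what lets a service catalog be priced, published, and enforced consistently.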
In the example shown here, the process involved categorizing a customer’s information and applications, and defining a service-level catalog with categories for primary storage, archival storage, backup and recovery, business continuity, and so forth. Costs associated with each category were also created.
The Right Approach
Can you do this yourself?
Sure. Here is how you might do it:
If the idea of creating a service catalog for protecting applications makes sense, you might be interested in leveraging technologies that support this concept. A primary premise of service orchestration and cloud is automation and standardization for efficiency and agility. Therefore, any application protection as a service technology should include:
The Right Attitude
What’s with the attitude?
The analysis and technology selection processes are not new to IT. Business process assessments and product acquisitions have been standard practices in data centers for years. What’s different now is both the emerging technology and how IT applies it. You need a service mentality, one meant to ensure users a level of protection for their applications that gives them the comfort that their work and the organization’s business processes are safe.
The combination of new technology with new approaches to IT keeps services at the forefront of cloud conversations. Vendors like EMC will bring concepts like application protection as a service to market. But, only you can make it all work with the right attitude.