The Summer of Love and the Urge to Converge

Before you get too excited, this is not some tawdry summer romance saga of data center passion and desire. There’s no need to alert the folks in HR. But now that I have your attention, what could the Summer of Love of 1967 and today’s shift to Converged Infrastructure possibly have in common?

One commonality is that both represent movements by groups whose views were considered different from those of the “establishment”. They often challenged the status quo and were united by the desire to make things better. And as their views and ideas began to drive meaningful change, they became accepted.

I admit the analogy is a bit of a stretch. But another key point they share is that for any major movement to take place, there needs to be a coalition of the willing to drive transformation. There is also often a tipping point that provides the catalyst for moving these transformations into the mainstream.


For the “hippies” of the ’60s, the tipping point came when they converged at the intersection of Haight and Ashbury in San Francisco to provide the catalyst for the “Summer of Love”. I’m not saying 10,000 IT members are about to converge on the VCE HQ in Marlboro, MA (although that would be very cool, and they do have a very nice Executive Briefing Center). But there is a clear trend of increased interest, with more and more user discussions revolving around Converged Infrastructure. One could argue that the tipping point is very close, if not happening right now.

The latest numbers certainly support this movement. IDC estimates that Converged Infrastructure will represent a $3 billion market by 2018, with a growth rate of over 13%, making it one of the fastest-growing segments in IT infrastructure. The reasons IT is moving to Converged Infrastructure certainly support these growth rates. According to IDC: 4X faster time to deploy, 20X better availability, and 2X greater productivity for IT teams.

As many folks know, Converged Infrastructure simplifies technology deployment with standardized, prepackaged blocks. These blocks also have API hooks that make them easy to integrate with a range of applications. Provisioning infrastructure resources is an automated process with a simple workflow. Converged Infrastructure provides for efficient consolidation, with the ability to throttle resources to support different workload types. That throttling, however, has typically been done by dialing in the CPU cores, amount of memory, and number of storage GBs needed by the app. In the world of servers, provisioning and tuning CPU and memory resources is a highly abstracted and simple process: select the number of cores and amount of memory, and away you go.
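
To make that workflow a little more concrete, here is a minimal sketch of what an automated provisioning request against a converged block might look like. The endpoint, payload fields, and values are hypothetical assumptions for illustration only, not any specific vendor’s API.

```python
# Hypothetical sketch of provisioning a prepackaged infrastructure block via a
# REST-style API. The endpoint and payload fields are illustrative assumptions.
import requests

def provision_block(api_base, app_name, cpu_cores, memory_gb, storage_gb):
    """Request a converged infrastructure resource block for an application."""
    payload = {
        "application": app_name,
        "compute": {"cores": cpu_cores, "memory_gb": memory_gb},
        "storage": {"capacity_gb": storage_gb},
    }
    # In a real deployment this call would kick off the automated workflow that
    # carves compute, network, and storage out of the converged block.
    response = requests.post(f"{api_base}/v1/blocks", json=payload, timeout=30)
    response.raise_for_status()
    return response.json()

# Example: dial in the resources for an OLTP database
# provision_block("https://ci.example.com/api", "oltp-db",
#                 cpu_cores=16, memory_gb=128, storage_gb=2048)
```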

Provisioning and tuning storage for different workloads, however, can be a fairly manual and cumbersome process to plan, set up, and administer (e.g., managing different FAST policies for specific apps to optimize storage performance). As a result, many admins simplify their approach and create a single bucket of “one size fits all” storage (e.g., a single FAST policy that gives every app the same level of performance). This does not mean different policies or multiple configurations can’t be used to control the service levels of the storage. But for most admins there is a tradeoff between keeping things as simple as possible and having more granular control. Most choose simplicity.
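
The tradeoff looks roughly like the sketch below. The application classes and tier mixes are made-up examples for illustration, not actual FAST policy definitions.

```python
# Illustrative only: the application classes and tier percentages below are
# hypothetical examples, not real FAST policy definitions.

# Granular approach: a dedicated tiering policy per application class.
per_app_policies = {
    "oltp-db":    {"flash_pct": 40, "sas_pct": 50, "nl_sas_pct": 10},
    "analytics":  {"flash_pct": 10, "sas_pct": 40, "nl_sas_pct": 50},
    "file-share": {"flash_pct": 5,  "sas_pct": 25, "nl_sas_pct": 70},
}

# "One size fits all" approach: a single policy applied to every application.
single_policy = {"flash_pct": 15, "sas_pct": 45, "nl_sas_pct": 40}

# The tradeoff: per-app policies give finer control over performance, but each
# entry is one more policy to plan, assign, and keep tuned as workloads change.
```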

The tipping point for Converged Infrastructure happening today is that the need to make this tradeoff is disappearing. The ability to provision storage by Service Level Objective (SLO) removes the complexity of managing different array policies or manually setting up specialized storage configurations. The optimization of storage services for different workload types can now be both simple and predictable. With SLOs, it’s possible to efficiently consolidate storage across hundreds to thousands of apps with different workloads. Each app can be individually managed based on its compliance with its service level. New apps can also be easily on-boarded, and SLOs for existing apps can be adjusted with a few simple clicks.
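
As a rough sketch of the idea, assume a handful of named service levels, each tied to a response-time target. The level names, targets, and the toy Array class below are illustrative assumptions, not a real array management API.

```python
# A minimal sketch of SLO-based provisioning, assuming hypothetical service level
# names and response-time targets, plus a toy in-memory "array"; real arrays
# expose this through their own management APIs.

SERVICE_LEVELS = {"diamond": 0.8, "gold": 3.0, "silver": 8.0, "bronze": 14.0}  # target avg response (ms)

class Array:
    def __init__(self):
        self.storage_groups = {}

    def provision(self, app_name, capacity_gb, service_level):
        """Provision storage for an app by naming only a capacity and an SLO."""
        # The array, not the admin, works out the media placement and tiering
        # needed to hit the response-time target associated with the SLO.
        self.storage_groups[app_name] = {
            "capacity_gb": capacity_gb,
            "service_level": service_level,
            "target_response_ms": SERVICE_LEVELS[service_level],
        }

    def set_service_level(self, app_name, service_level):
        """Adjusting an existing app's SLO is a single call, not a policy redesign."""
        group = self.storage_groups[app_name]
        group["service_level"] = service_level
        group["target_response_ms"] = SERVICE_LEVELS[service_level]

array = Array()
array.provision("oltp-db", capacity_gb=2048, service_level="gold")
array.set_service_level("oltp-db", "diamond")  # easy to dial up after onboarding
```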

SLO management also provides a critical workload planning service that understands the current load running across the storage.  Admins can monitor the amount of headroom available and the system will automatically calculate the amount of additional storage resources and performance needed to support the SLO for new apps.  The end result is a level of predictability that makes it possible to confidently support a range of workload types, and addresses what we call the “half a cookie conundrum” faced by many IT admins.
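
In simplified terms, the headroom check works something like the sketch below. The numbers and the safety margin are illustrative assumptions; a real SLO planner works from continuously measured workload data rather than fixed inputs.

```python
# Illustrative headroom math with made-up numbers; real SLO management tools do
# this continuously against measured workload and performance data.

def remaining_headroom(total_capability_iops, current_load_iops):
    """Headroom = performance capability not yet consumed by running apps."""
    return total_capability_iops - current_load_iops

def can_onboard(new_app_iops, total_capability_iops, current_load_iops, safety_margin=0.2):
    """Check whether a new app fits without pushing existing apps out of their SLOs.

    A safety margin is held back so workload bursts don't immediately cause breaches.
    """
    usable = remaining_headroom(total_capability_iops, current_load_iops)
    return new_app_iops <= usable * (1 - safety_margin)

# Example: 200k IOPS of capability, 150k already in use, new app expected at 15k.
print(can_onboard(new_app_iops=15_000,
                  total_capability_iops=200_000,
                  current_load_iops=150_000))  # True: fits within the usable headroom
```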

If you have young kids, you will understand what I mean. If you give your kid half a cookie, generally they are happy to get a nice treat. But if you give your kid a whole cookie and then take half of that cookie back, you will most likely have an unhappy kid and need to justify why you took away half their cookie.

It’s a similar situation to what an IT admin goes through with their users and app owners. The first set of apps ran great because they had all of the resources to themselves, i.e., they got the whole cookie. But as apps were added, they had to share the infrastructure, and in their minds, they had to give up some of the cookie they were first given. Provisioning via SLO allows every app to get a consistent, predictable level of service, whether it is the first app to be added or the last app that fits. It provides a critical capability, solves a real problem, and provides the tipping point for many large-scale Converged Infrastructure deployments and consolidations.

The concept is certainly gaining traction. The rollout of the capability is resonating with IT admins, and feedback has been highly positive. As adoption continues to increase, it’s easy to see we are quickly approaching the tipping point, with Converged Infrastructure becoming the deployment option of choice for an even greater range of applications and workload types.

The American psychologist Timothy Leary summed up the Summer of Love by defining the movement as a generation who joined together to “turn on, tune in, drop out”. He later explained what that statement meant: “Turn on” to activate your neural and genetic equipment, “Tune in” to interact with the world around you, and “Drop out” as an active, selective, graceful process of detachment.
The statement might apply to today’s movement to Converged Infrastructure as “Turn on” to activate your urge to converge, “Tune in” to interact with the workloads around you, and “Drop out” as an active, selective, graceful process of detachment from complexity.

See you on the bus to Marlboro.  Peace.

Scott Delandy

About the Author: Scott Delandy

As an advocate for Dell Technologies and its customers, Scott Delandy accelerates technology transformations across operations to exceed business objectives and deliver results. He drives engaging conversations that prioritize client needs within Dell’s vision, technology roadmap and modernization initiatives. As a vital leader of Dell’s Infrastructure Solutions Group, Scott is known for his transparency and inviting team members into real world dialogues to invest in the company’s future. Since 1990, Scott has served EMC/Dell Technologies in numerous roles, building meaningful, sustained relationships across technology areas, including storage infrastructure, disaster recovery, cloud computing, virtualization, next gen apps and containers. Scott holds a Bachelor of Business Administration from the University of Massachusetts, Amherst.