Unwinding Portfolio Dependency Hell with a DevOps Dependency Fabric

One of the more common customer challenges to DevOps is portfolio dependency hell. I have listened to a number of exasperated CIOs and IT leaders vent that ‘my portfolio is so complex and connected, there is no logical starting point for DevOps!’ or, more cynically, ‘automation is great; that means I can deploy changes that will break production faster.’ In most cases, this isn’t push-back on DevOps or automation. Rather, it is a realization that DevOps in the enterprise needs to be more than just going faster; it also needs to focus on resiliency. It needs to help solve dependency hell.

What is Dependency Hell?

Dependency hell is a common phenomenon in most large portfolios. It manifests when relationships among applications, infrastructure, and configuration are ambiguously defined, difficult to discover, poorly documented, burdensome to test, and often unique to a specific application, service, or component. The classic symptom looks like this:

  • A change is made to Application ABC; after weeks of testing, it is promoted to production.
  • Once the new version of Application ABC is deployed to production, Application XYZ breaks for unknown reasons.
  • After analysis and debugging, it is determined that version and configuration mismatches introduced by the Application ABC deployment inadvertently broke Application XYZ.

The sentiments expressed by those CIOs and IT leaders echo this example. How can accelerating the pace of change make things better? How can DevOps help unwind dependencies in large, complex portfolios? How does going faster, more consistently, make this better?

It’s All in the Data

DevOps isn’t just about going fast (although that is absolutely what makes it sexy); it is also about traceability: the audit trail created as changes are built, validated, verified, and deployed. Traceability comes from the data and metadata collected as changes enter and move through a pipeline. Feature type, platform version, test cases, build number, and configuration parameters are just a handful of the data types that can be captured as a change is packaged and promoted through the development lifecycle. Traditionally, IT shops compile this data manually into large configuration management databases, spreadsheets, and the like. That process is cumbersome, error-prone, and often an afterthought, particularly because data collection is labor-intensive, requiring investigation across multiple tools and databases such as configuration management and requirements management systems.
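The traceability record described above can be captured automatically at each pipeline stage instead of compiled by hand afterward. A minimal sketch of what such a record might look like, assuming an in-memory list as a stand-in for the real metadata store (all names here are illustrative, not from any specific tool):

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRecord:
    """Traceability metadata captured as a change moves through the pipeline."""
    app: str
    build_number: int
    platform_version: str
    feature_type: str
    config: dict = field(default_factory=dict)
    test_cases: list = field(default_factory=list)

# In-memory stand-in for the CMDB / metadata store a real pipeline would use.
audit_trail: list = []

def record_promotion(record: ChangeRecord) -> None:
    """Each pipeline stage appends its record automatically, not manually."""
    audit_trail.append(record)

# Hypothetical example: the pipeline records Application ABC's promotion.
record_promotion(ChangeRecord(
    app="Application ABC",
    build_number=412,
    platform_version="java-17",
    feature_type="api-change",
    config={"db.pool.size": "20"},
    test_cases=["regression-suite", "api-contract-tests"],
))
```

Because each stage writes its own record, the audit trail stays evergreen; nobody has to reconstruct it later from multiple tools.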

By employing DevOps’ cross-functional approach and extending continuous delivery (CD) tooling, DevOps solutions can provide an automated, evergreen mechanism for managing portfolio dependencies. I call this the ‘dependency fabric’. The dependency fabric consists of five components: pipeline workflows, source data, data ingestion, data analytics, and reporting. The fabric starts when a change is promoted to production, triggering the data ingestion process. This process is a collection of ETL jobs and API calls that pull the latest known-good application and infrastructure state from the source systems. A dependency data lake consolidates the streams of structured and unstructured data collected during ingestion. Like a cross-functional team, this data lake represents the various perspectives and opinions on the latest known-good state. An analytics engine then manipulates and interprets the data, reporting a dependency matrix that can inform various stages of the pipeline. For example, if I make API changes to Application A, the workflow and policy engines interrogate the dependency data and dynamically generate a test plan that not only tests the changes to Application A but also tests the other applications that depend on Application A’s published APIs.
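The test-plan example above can be sketched in a few lines. Assume the analytics engine has already produced a dependency matrix mapping each application to the applications that consume its APIs (the matrix contents and suite names here are hypothetical):

```python
# Hypothetical dependency matrix produced by the analytics engine:
# each key publishes APIs consumed by the applications in its list.
DEPENDENCY_MATRIX = {
    "Application A": ["Application XYZ", "Billing Service"],
    "Application XYZ": [],
    "Billing Service": [],
}

def generate_test_plan(changed_app: str) -> list:
    """Build a plan covering the changed app plus every consumer of its APIs."""
    plan = [f"{changed_app}::full-suite"]
    for dependent in DEPENDENCY_MATRIX.get(changed_app, []):
        # Dependents only need the integration tests that exercise the APIs.
        plan.append(f"{dependent}::integration-suite")
    return plan

print(generate_test_plan("Application A"))
# ['Application A::full-suite',
#  'Application XYZ::integration-suite',
#  'Billing Service::integration-suite']
```

The point is that the plan is derived from live dependency data at pipeline time, so a change to Application A can never silently skip the consumers it might break.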

[Figure: DevOps dependency fabric]

The dependency fabric enables the pipeline workflows and coded policies to be aware of the broader enterprise portfolio. By employing a dependency fabric and dynamically generating test plans, enterprises can begin to unwind the complex dependency hell that is impeding productivity, reducing speed, and driving cost into the IT system.

Want to learn more? Contact your EMC Rep or DevOps@EMC.com

About the Author: Bart Driscoll

Bart Driscoll is the Global Innovation Lead for Digital Services at Dell Technologies. This practice delivers a full spectrum of platform, data, application, and operations related services that help our clients navigate through the complexities and challenges of modernizing legacy portfolios, implementing continuous delivery systems, and adopting lean DevOps and agile practices. Bart’s passion for lean, collaborative systems, combined with his tactical, action-oriented focus, has helped Dell Technologies partner with some of the largest financial services and healthcare companies to begin the journey of digital transformation. Bart has broad experience in IT, ranging from network engineering to help desk management to application development and testing. He has spent the last 22 years honing his application development and delivery skills in roles such as Information Architect, Release Manager, Test Manager, Agile Coach, Architect, and Project/Program Manager. Bart has held certifications from PMI, Agile Alliance, Pegasystems, and Six Sigma. Bart earned a bachelor’s degree from the College of the Holy Cross and a master’s degree from the University of Virginia.