I recently had the opportunity to work with IT analyst Dennis Drogseth (Enterprise Management Associates) on a video and whitepaper about IT systems transformation as an enabler of digital business transformation. This got me thinking about the fundamental definition of what a system is.
In my college days, I became fascinated with looking at the world in terms of systems – so much so that I moved from economics to sociology and eventually landed on natural systems (i.e., biology). In my senior year, I got hooked on programming and, after graduation, moved into the world of IT – another kind of system.
Mere lists or related groups of things (natural or man-made) do not make a system. To paraphrase one of the early general systems theorists, Ludwig von Bertalanffy, systems are groups of interconnected, interdependent parts joined by processes that keep everything working together (e.g., negative entropy, homeostasis).
Systems thinking is a fundamentally better way to get your arms around many complex issues. Recall how in grammar school you learned about natural systems like the weather (i.e., the water cycle) and food chains (e.g., an ecological cycle) – and later on about social, economic and political systems.
Let’s consider “IT systems” – a term that has been around for a long time. ITIL defines it most broadly as a system involving people, processes and technology. But how we put IT technology (e.g., data center infrastructure) together largely determines its degree of “system-ness.” And the more the system-ness, the better the outcomes for businesses and IT operations.
Traditionally, and even today, data centers are largely do-it-yourself (DIY) integrations of multi-vendor components (compute, storage, network and hypervisor). DIY infrastructure is either fully home-grown or, at best, built from a reference architecture so you don’t have to figure everything out yourself.
But after you connect the parts, you still have to manage, support and sustain DIY infrastructure as technology silos. This has huge drawbacks:
- You will struggle with firmware upgrades and their potential incompatibility across multi-vendor gear.
- You must call each component vendor separately for support.
- Even monitoring tools that display all components in a “single pane of glass” don’t show you how their relationships impact total system and workload status.
- Provisioning will still be siloed, so you resort to bolt-on “orchestration” tools (as sketched below).
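To make that last point concrete, here is a minimal sketch of what bolt-on orchestration tends to look like in practice. Everything in it is hypothetical – the vendor client classes, method names and rollback logic are invented for illustration – but it shows the core problem: the “system” exists only in the glue script, not in the infrastructure itself.

```python
# Hypothetical sketch: the glue script a bolt-on "orchestration" tool
# amounts to. The vendor clients below are invented stand-ins for three
# separate, siloed management APIs.

class ComputeClient:
    def create_vm(self, name: str, cpus: int) -> str:
        print(f"[vendor A] created VM {name} with {cpus} CPUs")
        return name

    def delete_vm(self, vm: str) -> None:
        print(f"[vendor A] deleted VM {vm}")

class StorageClient:
    def create_lun(self, size_gb: int) -> str:
        print(f"[vendor B] created {size_gb} GB LUN")
        return f"lun-{size_gb}"

    def map_lun(self, lun: str, vm: str) -> None:
        print(f"[vendor B] mapped {lun} to {vm}")

class NetworkClient:
    def assign_vlan(self, vm: str, vlan: int) -> None:
        print(f"[vendor C] put {vm} on VLAN {vlan}")

def provision_workload(name: str, cpus: int, size_gb: int, vlan: int) -> None:
    """Each silo succeeds or fails on its own; cross-silo consistency
    (including rollback) is the glue script's problem, not the system's."""
    compute, storage, network = ComputeClient(), StorageClient(), NetworkClient()
    vm = compute.create_vm(name, cpus)
    try:
        lun = storage.create_lun(size_gb)
        storage.map_lun(lun, vm)
        network.assign_vlan(vm, vlan)
    except Exception:
        compute.delete_vm(vm)  # manual cleanup across silos
        raise

provision_workload("web-01", cpus=4, size_gb=200, vlan=100)
```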
Converged infrastructure changes this. It consists of pre-integrated multi-vendor compute, storage, network and virtualization components that arrive on your loading dock as a single product, ready to run.
But this new definition of “system-ness” extends throughout the life cycle in four ways:
- Built-in monitoring and management software that is architecture-aware, i.e., it shows total system health, including the status and relationships of major components, their subcomponents and the workloads they support (see the sketch after this list)
- Built-in “call home” technology connected to a single source of live, on-call support (one phone call for help instead of separate calls to multiple equipment manufacturers)
- Built-in awareness of firmware and hypervisor release levels, including when they need to be upgraded, and a way to download new releases, pre-tested for multi-vendor component interoperability
- Like natural systems that are self-directing and self-regulating, IT infrastructure system-ness is increasingly enriched with software-based automation – for continuous processing, data protection and more
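As a thought experiment (this is not any vendor’s actual API – the classes and statuses are invented), the difference between a flat “single pane of glass” and architecture-aware monitoring can be sketched as a health rollup that follows component relationships down to the workloads they support:

```python
# Hypothetical sketch of architecture-aware health (invented classes and
# statuses, not any vendor's real API). Instead of listing component
# statuses side by side, the model knows which components depend on which,
# and rolls status up to workload and total-system health.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    status: str = "ok"  # "ok" | "degraded" | "failed"
    children: list["Component"] = field(default_factory=list)

    def health(self) -> str:
        """A parent is only as healthy as its worst dependency."""
        statuses = [self.status] + [c.health() for c in self.children]
        for level in ("failed", "degraded"):
            if level in statuses:
                return level
        return "ok"

# Relationships that a flat component list would not capture:
fabric = Component("network-fabric", status="degraded")
array = Component("storage-array", children=[fabric])
host = Component("compute-host")
workload = Component("payroll-app", children=[host, array])

print(workload.health())  # "degraded" – the workload inherits the fabric issue
```

The design point is the dependency graph: once relationships are modeled, workload health falls out of component health automatically – which is exactly what a flat “single pane of glass” cannot tell you.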
The video and whitepaper that I previously referenced will give you a real-world converged infrastructure perspective on the modern definition of IT systems.