Efficiency vs. Brute Force: Two Paths to Cyber Resilience

How systems scale—not just how much—determines complexity, efficiency, and long-term cyber resilience at enterprise scale.

Key takeaways:

    • Not all scaling models are equal—how systems scale determines long-term outcomes
    • Scale-out architectures introduce overhead that compounds with growth
    • Efficiency-driven designs reduce infrastructure demands while maintaining performance
    • Recovery performance and speed are not sacrificed in efficiency-driven architectures
    • The most scalable systems require less infrastructure to deliver more

In my last post, I unpacked what I called the scale-out illusion—the idea that adding nodes automatically delivers simplicity and scalability.

Many modern data protection and cyber resilience platforms are built on this scale-out model—adding nodes, compute, and storage as environments grow.

But to understand why that model often breaks down at enterprise scale, we need to look more closely at how these systems actually behave.

Because not all scaling models are the same.

Some solutions rely on adding infrastructure to maintain performance as data grows. Others are designed to maximize efficiency from the outset—reducing the amount of data stored, moved, and managed.

A common assumption is that reducing the amount of infrastructure—less compute, less flash, fewer nodes—comes at the expense of recovery performance and speed.

In practice, that’s not the case.

The difference lies not in how much infrastructure you add, but in how efficiently the system operates as it scales.

Scaling isn’t neutral

At a high level, most modern cyber resilience platforms scale in one of two ways.

Some scale by adding resources in parallel—more nodes, more compute, more storage, more networking. Growth is achieved by distributing workloads across an expanding cluster.

Others scale by maximizing efficiency within the system itself—reducing the amount of data stored, minimizing resource consumption, and optimizing how work is processed.

Both approaches can deliver capacity and performance—but their longer-term impact on complexity, efficiency, and predictability is very different.

The hidden cost of distribution

Scale-out architectures rely on distribution.

As nodes are added, data is distributed across more systems, increasing the amount of coordination, metadata, and data movement required to keep the environment operating. At smaller scale, this is manageable.

But as environments grow, the system has to do more work just to function:

    • More coordination between nodes
    • Greater metadata overhead
    • Increased data movement across the environment
    • More variability in performance as workloads distribute unevenly

In other words, scaling out doesn’t just add capacity.

It adds system overhead.

And that overhead grows with every additional node.
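To make that concrete, here is a rough, purely illustrative sketch in Python, not a model of any specific product. It assumes a fully meshed cluster in which every node coordinates with every other node, and a hypothetical 100 TB of usable capacity per node. Under those assumptions, coordination paths grow quadratically while capacity grows only linearly.

```python
# Illustrative only: a toy model of how coordination overhead can outpace
# capacity in a fully meshed scale-out cluster. The all-to-all assumption
# and the 100 TB node size are hypothetical, not measurements of any product.

def cluster_profile(nodes: int, capacity_per_node_tb: float = 100.0) -> dict:
    """Return usable capacity (linear) vs. pairwise coordination paths (quadratic)."""
    capacity_tb = nodes * capacity_per_node_tb       # grows linearly with node count
    coordination_paths = nodes * (nodes - 1) // 2    # grows quadratically with node count
    return {
        "nodes": nodes,
        "capacity_tb": capacity_tb,
        "coordination_paths": coordination_paths,
        "paths_per_tb": coordination_paths / capacity_tb,
    }

if __name__ == "__main__":
    for n in (4, 8, 16, 32, 64):
        p = cluster_profile(n)
        print(f"{p['nodes']:>3} nodes | {p['capacity_tb']:>7.0f} TB usable "
              f"| {p['coordination_paths']:>5} coordination paths "
              f"| {p['paths_per_tb']:.3f} paths per TB")
```

In this toy model, doubling the node count roughly quadruples the coordination work while only doubling capacity. Real systems mitigate this in different ways, but the underlying tendency is the point: the work required simply to keep the cluster coherent grows faster than the capacity you gain.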

At scale, this isn’t just a technical consideration; it directly affects how predictable, efficient, and manageable the environment becomes.

For IT teams, this often translates into more time spent managing infrastructure, troubleshooting performance variability, and maintaining increasingly complex environments.

For executives, the impact is broader. Predictability and recovery performance are not just technical metrics—they directly influence business continuity, risk exposure, and the ability to operate with confidence under pressure.

When efficiency changes the equation

Efficiency-driven architectures take a fundamentally different approach.

Instead of distributing the problem across more infrastructure, they reduce the problem itself.

Less data stored.
Less data moved.
Less infrastructure required to manage it.

That changes the scaling curve, and the outcome, entirely.

Because when systems are designed to do more with less:

    • Metadata remains controlled
    • Resource consumption grows more slowly
    • Performance remains more predictable
    • Operational complexity doesn’t compound with scale

The system doesn’t just grow.

It stays manageable as it grows.
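As a minimal sketch of why this changes the curve, consider the purely hypothetical comparison below. The 20:1 data reduction ratio and the 100 TB per-node figure are assumptions chosen for illustration, not vendor benchmarks; the sketch simply shows how reducing the data itself shrinks the infrastructure needed to protect the same logical footprint.

```python
# Illustrative only: how data reduction changes the infrastructure curve.
# The 20:1 reduction ratio and 100 TB node size are assumptions, not benchmarks.
import math

def nodes_required(logical_tb: float, reduction_ratio: float,
                   usable_tb_per_node: float = 100.0) -> int:
    """Nodes needed to hold the data after reduction (e.g., dedupe/compression)."""
    physical_tb = logical_tb / reduction_ratio
    return max(1, math.ceil(physical_tb / usable_tb_per_node))

if __name__ == "__main__":
    for logical_tb in (500, 2_000, 10_000, 50_000):
        brute_force = nodes_required(logical_tb, reduction_ratio=1.0)   # store everything as-is
        efficient = nodes_required(logical_tb, reduction_ratio=20.0)    # assumed 20:1 reduction
        print(f"{logical_tb:>6} TB logical | no reduction: {brute_force:>4} nodes "
              f"| 20:1 reduction: {efficient:>3} nodes")
```

The specific ratio is not the point. The point is that when the system reduces the data it stores and moves, infrastructure requirements stop tracking raw data growth one for one.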

A common assumption is that scale-out architectures are required to maintain recovery performance as environments grow.

But in fact, architectures designed around efficiency are often better positioned to deliver consistent, predictable recovery performance at scale—without requiring continuous infrastructure expansion.

Why this matters for cyber resilience

Cyber resilience places unique demands on infrastructure.

It’s not just about storing data.

It’s about:

    • Protecting large volumes of data consistently
    • Detecting anomalies across that data
    • Recovering quickly and predictably under pressure

These are system-wide operations.

And their success depends heavily on how efficiently the underlying platform operates at scale.

Architectures that rely on brute-force expansion often find themselves working harder over time just to maintain the same outcomes.

In contrast, architectures built on efficiency preserve headroom—both operationally and technically.

A more important question

For years, the IT industry has framed scaling as a question of how easily you can add more infrastructure.

But that’s no longer the right question.

The better question is:

How much infrastructure do you need to add in the first place?

Because at enterprise scale, the difference between those two approaches becomes significant.

Not just in cost.

But in complexity, predictability, and ultimately, resilience.

Setting the stage

This distinction—between scaling by distribution and scaling by efficiency—is foundational.

Because it leads directly to a broader realization:

Architecture is not just an implementation detail. It determines outcomes.

In my next post, I’ll explore that idea more directly—looking at how architectural decisions shape everything from performance and scalability to operational simplicity and long-term sustainability.

And why, in cyber resilience, those decisions matter more than most organizations realize.

Explore how we can help you scale cyber resilience efficiently and predictably.

About the Author: Simon Jelley

As Vice President and General Manager, Simon Jelley leads the Cyber Resilience portfolio at Dell Technologies. He focuses on empowering organizations to secure their most critical asset — data — against an evolving landscape of threats. By delivering innovative solutions that tackle complex challenges in cyber security, business resilience, and AI, Simon helps customers maintain continuity and confidence in a digital-first world.

With a professional heritage spanning more than 25 years in the Data and Information Management landscape, Simon brings deep expertise in backup and recovery, SaaS protection, and cloud data management. His career is defined by a commitment to technical leadership and a passion for solving real-world problems.

Throughout his tenure, Simon has established a strong track record of driving business transformation. He specializes in scaling SaaS and Enterprise data management businesses from concept to global scale, ensuring that technology adapts to meet human needs. Simon believes that when data is secure and accessible, businesses can focus on what matters most — innovating and moving forward.