Patching Missteps Are Not an Excuse to Blame Victims of Ransomware Attacks

“It’s their own fault. They wouldn’t have been hit if they’d kept up with their patches and updates.”

How many times did we hear this line in the wake of WannaCry, Petya and virtually every other cyber attack that has exploited known vulnerabilities in recent years? To hear the Monday-morning quarterbacks talk, you’d think data security teams the world over were either lazy, unknowledgeable or both if they fell victim to one of these massively successful cybercriminal ventures.

While it’s true that some of this year’s major ransomware attacks could have been avoided with timely patching, blaming the victim is naive.

For mid-sized and larger organizations with an average IT department, patching is no easy feat – it’s challenging, time-consuming and rife with issues.

The Scale Issue

It may be relatively easy to keep up with one or two software and OS updates when you’re working with a personal computer and a handful of applications. However, for IT teams responsible for updating thousands of systems, the number of patches needed per month is not one or two. It could be over 100!

I recently counted that an average 500-bed hospital uses about 460 applications. Every one of them requires updates and patches on an ongoing basis, and the most common ones – Flash, PDF readers, web browsers and OSes – require the most frequent attention. Finding and attacking vulnerabilities is time-consuming and expensive for cybercriminals, so by targeting common apps they get a bigger bang for their buck. Luckily for them, these apps also tend to be rife with vulnerabilities.
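
To put that volume in perspective, here is a rough back-of-the-envelope sketch in Python. The 460-application count comes from the paragraph above; the patch rate, fleet size and testing time are hypothetical assumptions chosen purely for illustration.

```python
# Back-of-the-envelope estimate of the monthly patch workload for a mid-sized
# organization. The 460-application figure comes from the article; every other
# number below is a hypothetical assumption used only for illustration.

applications = 460           # apps in use at an average 500-bed hospital (from the text)
patch_rate = 0.25            # assumed fraction of apps shipping at least one patch per month
endpoints = 5000             # assumed number of managed devices
minutes_per_patch_test = 30  # assumed time to test and stage a single patch before rollout

patches_per_month = int(applications * patch_rate)
testing_hours = patches_per_month * minutes_per_patch_test / 60
deployments = patches_per_month * endpoints

print(f"Patches to evaluate each month: {patches_per_month}")
print(f"Staff hours just to test and stage them: {testing_hours:.0f}")
print(f"Individual patch deployments to track: {deployments:,}")
```

Even with conservative assumptions, the workload lands well beyond "one or two updates a month" – and that is before anything goes wrong during rollout.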

Let’s not forget that the existence of these vulnerabilities is not the victim’s fault – it’s the vendor’s. And while vendors receive their share of negative attention when vulnerabilities are revealed, for some reason we find vulnerabilities much less baffling than a victim’s inability to keep up with the demands of applying the patches.

The Domino Effect

If updates and patches could be rolled out without side effects, they would be slightly more manageable. But this isn’t the case either.

Anyone who has worked for a large company knows firsthand the collective groan that spreads when the IT team announces updates. Updates are inconvenient – work comes to a standstill while employees download and reboot. And inevitably, there are issues.

Maybe a few employees’ VPNs no longer work. Maybe their multi-factor authentication becomes buggy. The reality is that most updates bring with them an array of complications and a flurry of help-desk calls, so IT teams plan for updates with this expectation.

The Offline Challenge

Of course, for every device that experiences an issue after an update, there’s another device that doesn’t receive the update at all. Endpoint security updates are typically pushed through an endpoint management console. If a device is not connected to the company’s network or not turned on when a patch is pushed, it will miss the update. If the user has administrative control, which is more common than you would think, he or she can opt out of the update. If either of these scenarios happens often enough, the company suddenly finds itself with a massive data security gap.
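
To see how quickly those gaps accumulate, here is a toy simulation in Python. The fleet size, offline rate and opt-out rate are hypothetical assumptions, not measured figures; the point is simply that even modest per-cycle miss rates leave a meaningful slice of the fleet unpatched at any given time.

```python
# Toy model of how missed updates accumulate into a security gap.
# All rates below are hypothetical assumptions, not measured data.

import random

random.seed(1)

endpoints = 2000   # assumed fleet size
cycles = 6         # monthly patch cycles to simulate
p_offline = 0.10   # assumed chance a device is off or offline when a patch is pushed
p_opt_out = 0.05   # assumed chance a local-admin user defers the update

# Track how many consecutive cycles each endpoint has missed.
behind = [0] * endpoints

for _ in range(cycles):
    for i in range(endpoints):
        missed = random.random() < p_offline or random.random() < p_opt_out
        behind[i] = behind[i] + 1 if missed else 0

exposed = sum(1 for b in behind if b > 0)
print(f"Endpoints missing at least the latest patch: {exposed} of {endpoints} "
      f"({exposed / endpoints:.0%})")
```

In this sketch, roughly one endpoint in seven is behind on the latest patch at any moment – a gap that exists even though the IT team pushed every update on schedule.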

Ideally, IT figures this out and fixes it quickly. But we don’t live in an ideal world – we live in one that makes patching thousands of endpoints highly challenging. And it’s only one item out of many on the average IT team’s checklist.

Patching Is Good. Endpoint Security That Works Is Better.

Don’t get me wrong. Patching should unequivocally be a priority for every IT team. A good strategy is to prioritize updates so that the most mainstream products – browsers, OSes and other widely deployed applications – are addressed first.
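
One minimal way to sketch that kind of prioritization, assuming each pending patch carries an estimated install base and a severity score such as CVSS, is to weight severity by how widely the product is deployed. The products and numbers below are hypothetical, and this is an illustration rather than a prescribed methodology.

```python
# Minimal sketch of ranking a patch backlog by exposure.
# Products, install bases and severity scores are hypothetical examples.

pending_patches = [
    {"product": "Web browser",      "install_base": 0.98, "severity": 8.8},
    {"product": "Operating system", "install_base": 1.00, "severity": 7.5},
    {"product": "PDF reader",       "install_base": 0.85, "severity": 9.1},
    {"product": "Niche CAD tool",   "install_base": 0.03, "severity": 9.8},
]

# Weight severity by how widely the product is deployed, so mainstream
# software with serious flaws rises to the top of the queue.
for p in pending_patches:
    p["priority"] = p["install_base"] * p["severity"]

for p in sorted(pending_patches, key=lambda x: x["priority"], reverse=True):
    print(f'{p["product"]:<18} priority={p["priority"]:.2f}')
```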

But when a ransomware attack or other exploit succeeds, we shouldn’t simply be asking why the victims weren’t up-to-date. We should be asking what else broke down in the data security chain that allowed the compromise to happen.

Did a software provider prioritize UI over security in their rush to market, allowing the vulnerability to exist in the first place? Did an endpoint security solution fail to stop a known threat? Was the victim relying on 10-year-old technology that simply is no longer equipped to stop modern threats?

There are many reasons security programs can fail to stop a threat. It’s time to change the conversation to offer a more comprehensive outlook on why breaches succeed. Otherwise, the blame will continue to be passed, and victims will continue to feel defenseless no matter how hard they try to keep up with changing data security demands. Even worse, cybercriminals will continue to succeed in their attack ventures, draining companies of millions more dollars and the entire industry of peace of mind.

About the Author: Brett Hansen

Brett Hansen is Vice President, Dell Unified Workspace. In this role, he is responsible for developing solutions that enable customers to simplify and streamline their client lifecycle, secure their endpoints, and ultimately provide users with a more productive and modernized workspace environment. With Dell Technologies uniquely positioned to deliver these solutions, Mr. Hansen harnesses capabilities from Dell Client, Dell Services, VMware and Secureworks to deliver integrated solutions spanning hardware, software and services. These technologies are optimized for the Dell Client portfolio but also embrace the multi-OS, heterogeneous device environments of our customers, ultimately providing them with the choice, simplification, and productivity improvements they desire. Brett engages with customers, channel partners and product developers on a daily basis, leveraging his more than 15 years of experience leading business development and channel functions in the software industry. Brett joined Dell after 12 years with IBM Software Group. In his last position at IBM, he served as Director, IBM Tivoli Demand Systems Marketing, where he held global responsibility for generating and managing the Tivoli pipeline.