Here’s How to Use GenAI Safely in the Enterprise

Five things to consider when building a safe GenAI strategy for any organization.

By Bobbie Stempfley, VP and business unit security officer, Dell Technologies

Artificial intelligence has been a research area since the 1950s, but it’s only in the last decade that we’ve seen it really take off. Machine learning, and now generative AI (GenAI), are on the verge of creating monumental business outcomes. According to Dell’s Innovation Index, the pressure is on to embrace this exciting technology. Almost six in 10 respondents worry that their current pace of innovation will leave their organization irrelevant in the next five years.

Before we rush to AI, though, we must think about how we can use it securely in the enterprise. As vice president and business unit security officer at Dell Technologies, I spend a lot of my time deciphering the security implications of new technologies. Here are five things to consider when building a safe GenAI strategy.

Attacks should be expected

Organizations are preparing for AI against a backdrop of rising cybersecurity threats. If there’s one thing that cybersecurity professionals know, it’s that it isn’t a question of if we’ll be attacked, but when.

More of our data and business processes exist in the digital environment than ever before, incentivizing adversaries who have also become more sophisticated. The complexity of our digital environments has also grown, expanding the attack surface that they can target. Our data no longer lives in a fortress with a clear perimeter. It's on our endpoints, everywhere.

Moreover, as Frank Abagnale Jr. (the inspiration for the movie Catch Me If You Can) pointed out in a conversation with me, AI empowers criminals to mount more convincing attacks far more easily and at scale.

AI increases the attack surface

AI can help us deal with our cybersecurity challenges, but it also intensifies them. On the positive side, there’s a great opportunity for generative AI to increase the effectiveness of prevention and defense. It can streamline security operations by improving analytics. It can benefit software developers by helping to find and fix flaws earlier and faster. However, we must also think about how AI might increase our attack surface and decide whether our existing prevention and protection measures are adequate.

One of the biggest concerns for companies implementing AI is data complexity. This technology uses statistical models that require huge amounts of data to train. That training happens repeatedly to accommodate new data as it becomes available to keep the AI models relevant and accurate. Enterprises have to manage the sourcing, communication, quality, integrity and security of that data, much of which will be sensitive.

According to Dell's Innovation Index, concern over cybersecurity is one of the biggest technology barriers stopping companies from innovating, particularly when data lives on insecure edge devices. As AI evolves, the endpoint will become an increasingly important part of an organization's AI transformation, requiring greater attention. The tension between security and innovation is one of the biggest paradoxes facing organizations today.

Security is a critical part of AI strategy

The added risks associated with AI mean that CISOs and CIOs must prioritize cybersecurity as part of their broader AI strategy. This means thinking about AI and the resources that it uses from a business perspective, not just from a technology perspective. People and process remain vital elements of the overall strategy.

Security teams should be part of a cross-disciplinary task force that explores the goals for AI, not just on a per-project level but as an overall initiative that will span multiple departments and processes. Understand the business outcomes that you’re hoping to achieve and how to evaluate them.

Knowing how AI affects your business processes will help you to evaluate its technical impact on cybersecurity. What types of AI will you need to achieve these goals, and what resources do they rely on?

Understanding those resources, particularly your data and its underlying infrastructure, is critical in evaluating AI's cybersecurity implications. Organizations often build data lakes (large collections of structured and unstructured data) without truly grasping all the data within them. Now is the time to answer questions about this data. Who owns it? What permissions do you have when using it? How sensitive is it? How high-quality is it?

Your experience with technology and business processes goes hand in hand. You must identify which business processes use your data and how. How critical are these processes to the business? This will help you to evaluate your risks when accessing that data for AI purposes.

Acknowledge people as a crucial part of those AI-driven business processes. Dell has advocated educating staff to become cybersecurity-literate for years. Now, cybersecurity professionals must educate them again to cope with the emerging security risks and opportunities around AI.

Understanding your processes and resources is vital when modeling AI-related threats to your enterprise. Explore threat actors’ tactics, techniques and procedures in relation to these processes and data infrastructure. Knowing how attackers will come at your AI workflows will give you an advantage, enabling you to put the right controls in place at appropriate points in the organization.

Wash, rinse, repeat

This evaluation isn’t something that you can do once and forget about. At Dell, we think about cybersecurity as a holistic and evolving discipline. You must revisit and refine your safety and security strategy in the context of AI, just as you should elsewhere.

AI is evolving at breakneck speed. Today’s tools are already far more sophisticated than they were two years ago, and your cybersecurity strategy must develop to reflect that. For example, two years ago, companies would not have considered how to protect their data while conducting prompt engineering. Now, with generative AI and large language models, that’s a clear requirement that also carries implications for employee awareness training.

The increasing importance of data in an AI context will also prompt many companies to consider a Zero Trust model. Zero Trust principles can help cybersecurity teams ensure least-privilege access to data by authenticating endpoints and the people using them.
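To make the Zero Trust idea concrete, the access decision above can be sketched as a policy check that combines user identity, device posture, and data sensitivity. This is a minimal, hypothetical illustration (the roles, classifications, and function names are invented for this example, not part of any particular product):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str          # role asserted by the identity provider
    device_compliant: bool  # endpoint passed posture checks (patched, encrypted)
    mfa_verified: bool      # user completed multi-factor authentication
    data_sensitivity: str   # classification of the data being requested

# Least-privilege mapping: each role may read only certain classifications.
ROLE_PERMISSIONS = {
    "data_scientist": {"public", "internal"},
    "ml_engineer": {"public", "internal", "confidential"},
}

def allow_access(req: Request) -> bool:
    """Grant access only when endpoint, identity, and permissions all check out.

    No request is trusted by default, regardless of network location.
    """
    if not (req.device_compliant and req.mfa_verified):
        return False  # untrusted endpoint or unverified user: deny
    allowed = ROLE_PERMISSIONS.get(req.user_role, set())
    return req.data_sensitivity in allowed

# A data scientist on a compliant, MFA-verified device may read internal data...
print(allow_access(Request("data_scientist", True, True, "internal")))      # True
# ...but not confidential data, and never from a non-compliant device.
print(allow_access(Request("data_scientist", True, True, "confidential")))  # False
print(allow_access(Request("ml_engineer", False, True, "confidential")))    # False
```

In a real deployment these checks would be enforced by an identity provider and a policy engine rather than application code, but the shape of the decision, deny by default and grant the minimum needed, is the same.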

Communication is key in protecting AI-driven systems

As a wide-ranging, cross-disciplinary technology, AI will involve many stakeholders from different areas of the enterprise. Simply asking for more money to implement the necessary controls isn’t enough. Security teams must learn how to communicate the importance of cybersecurity so it resonates with decision-makers. Use your command of business processes to describe the risks and mitigation needs to business leaders. What business metrics would an attack on an AI data pipeline affect? You can use cybersecurity and AI risk frameworks like NIST’s to help drive these conversations.

Even if AI doesn’t feature highly in your business strategy today, it will likely gain more traction in time as more companies adopt it as a competitive asset. Many companies are just beginning their AI journey, and the technology itself will evolve in unexpected ways. Organizations must have strategies to adapt while still protecting their workflows and underlying data. Prepare today, because AI promises to change the way that we do business tomorrow.