Key Takeaways
- Protecting AI workloads doesn’t mean starting from scratch; adapt your existing security tools.
- Cut through the noise: learn how to identify AI risks, like prompt injection attacks, and dispel common myths.
- Strengthen your AI resilience with layered defenses, robust identity management, vigilant monitoring, and proven recovery strategies.
Artificial intelligence is top of mind for our customers, and securing those AI workloads is a critical priority. But often, the myths around AI security can get in the way and can stifle the adoption of AI. To help clear things up, I sat down with my colleague Chris Cicotte, a senior cybersecurity consultant at Dell, to tackle some of these common myths and discuss practical solutions to address them. We even dove into a real-world attack scenario to see how these principles apply in action.
For more resources and short reads, explore Dell’s Cybersecurity Awareness Month (CAM) hub on Securing AI.
The following has been edited for length and readability.
Are AI systems becoming too complex to secure effectively?
Cicotte: There’s a misconception that AI systems are becoming too complex to secure effectively. While AI does introduce new security risks like prompt injection and data manipulation, these challenges are manageable with the right approach. Robust security measures are critical to protect AI systems from both traditional and AI-specific threats.
What is the right approach exactly? Does this mean that I need to buy all new tools to secure AI?
Cicotte: Securing AI should focus on enhancing existing tools, not starting from scratch. Many of our customers have a great foundation for security already, and many of their existing tools can be adapted. Think of AI as just another business workload, but one with unique characteristics requiring a tailored security practice. Foundational practices like identity and access management, network segmentation, and endpoint protection remain essential. What’s important is tailoring these practices to address AI-specific risks such as protecting training data, securing algorithms, and mitigating risks like adversarial inputs.
In thinking about the new types of AI risks that you mentioned earlier, I have a scenario to run by you. Let’s say you work in customer service for an airline that uses an AI chatbot. You’re flooded with calls from customers who can’t access their frequent flyer miles. Upon investigation, you notice errors in the logs like “syntax error in SQL statement.” What type of cyber incident would you say this is?
Cicotte: This would be a prompt injection or SQL injection attack. Those errors indicate that attackers exploited the chatbot’s input fields with malicious SQL code to access or alter customer account data.
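To make that diagnosis concrete, here is a minimal, hypothetical Python sketch of the kind of vulnerable query construction Cicotte is describing: the chatbot backend formats user text straight into a SQL string, so a crafted message can change what the query does. The table, field, and function names are illustrative only, not taken from any real airline system.

```python
import sqlite3

# Hypothetical, vulnerable chatbot backend: user text is formatted straight into SQL.
def lookup_miles_vulnerable(conn, member_id_text):
    query = f"SELECT miles FROM members WHERE member_id = {member_id_text}"
    return conn.execute(query).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (member_id INTEGER, miles INTEGER)")
conn.executemany("INSERT INTO members VALUES (?, ?)", [(1, 5000), (2, 12000)])

print(lookup_miles_vulnerable(conn, "1"))           # intended use: one member's balance
print(lookup_miles_vulnerable(conn, "1 OR 1=1"))    # injected input: dumps every member's balance

try:
    lookup_miles_vulnerable(conn, "1'")             # a clumsy payload fails outright ...
except sqlite3.OperationalError as err:
    print("log entry:", err)                        # ... and surfaces as the SQL syntax errors seen in the logs
```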
So, you realize you’ve been hit with this SQL injection. What would you recommend this customer do next?
Cicotte: Immediately take the chatbot offline. Then investigate the database logs for unauthorized access and ensure compliance with any disclosure laws that apply in your jurisdiction. These steps are critical to stop the exploitation, assess the damage, and meet regulatory obligations.
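As a rough illustration of that log investigation step, the sketch below filters a hypothetical flat-file application log for the “syntax error in SQL statement” entries from the scenario and tallies the source addresses involved. The log format is assumed; in practice you would run equivalent queries in a SIEM or against the database’s own audit log.

```python
import re
from collections import Counter

SQL_ERROR = re.compile(r"syntax error in SQL statement", re.IGNORECASE)

def triage(log_lines):
    """Collect suspected injection attempts and tally the source IPs behind them."""
    hits = [line for line in log_lines if SQL_ERROR.search(line)]
    sources = Counter(line.split()[0] for line in hits)   # assumes the source IP is the first field
    return hits, sources

sample_log = [
    '203.0.113.7 POST /chat "balance for member 1" 200',
    '198.51.100.9 POST /chat "1; DROP TABLE members--" 500 syntax error in SQL statement',
    '198.51.100.9 POST /chat "1 OR 1=1--" 500 syntax error in SQL statement',
]

hits, sources = triage(sample_log)
print(f"{len(hits)} suspect entries; top sources: {sources.most_common(3)}")
```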
What would you recommend putting in place in the future to help stop or mitigate similar prompt or SQL injection attacks?
Cicotte: Development teams need to be trained to use prepared statements or parameterized queries. You should also enforce least-privilege access with multi-factor authentication and role-based access control, and use a web application firewall to limit the impact of attempted injections.
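Here is what the parameterized-query fix looks like in a minimal Python sketch, using the standard-library sqlite3 module as a stand-in for whatever database the chatbot actually uses: the driver binds user input strictly as a value, so injected text can no longer change the structure of the query.

```python
import sqlite3

# Parameterized version of the earlier lookup: the "?" placeholder is bound by the
# driver, so user input is treated as data, never as executable SQL.
def lookup_miles_safe(conn, member_id_text):
    query = "SELECT miles FROM members WHERE member_id = ?"
    return conn.execute(query, (member_id_text,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (member_id INTEGER, miles INTEGER)")
conn.executemany("INSERT INTO members VALUES (?, ?)", [(1, 5000), (2, 12000)])

print(lookup_miles_safe(conn, "1"))           # normal request still works
print(lookup_miles_safe(conn, "1 OR 1=1"))    # the injection attempt matches nothing: []
```

Pairing this with a database account that can only read the tables the chatbot genuinely needs, plus role-based access control and a web application firewall in front of the endpoint, limits what even a successful injection could reach.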
How do you get that customer data back? What do the recovery steps look like in this scenario?
Cicotte: Restore that data from the most recent uncompromised backup. Verifying that the backup itself is clean is one of the first things to check, because attackers often target backups as well. Then you must notify customers, reset their passwords, and advise them to monitor their credit card and account activity.
That scenario really highlights how fast things can escalate—and how critical it is to have both proactive and reactive measures in place. But securing AI isn’t just about responding to attacks or protecting data. It’s about understanding the full scope of what needs to be secured and how to build resilience across the entire AI ecosystem.
Now, when we’re thinking about securing AI, is it enough to just focus on securing the data?
Cicotte: Comprehensive AI security means safeguarding the entire ecosystem: the models, APIs, outputs, systems, and devices, not just the data. As AI becomes more central to critical applications, the risks increase. Models can be tampered with, and APIs can be exploited. You need to build a multi-layer defense to protect models, secure APIs with robust authentication, and monitor outputs for suspicious patterns. Comprehensive AI security fosters trust by ensuring systems are reliable and resilient.
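One way to picture “monitoring outputs for suspicious patterns” is a lightweight screen that runs before a model response is returned to the user. The sketch below is illustrative only: the patterns and the blocking decision are assumptions, not a vetted detection ruleset.

```python
import re

# Hypothetical output screen for a chat endpoint: flag responses that look like data
# exfiltration, leaked SQL, or echoed prompt-injection phrasing before they reach the user.
SUSPECT_PATTERNS = {
    "long digit run (possible account/card number)": re.compile(r"\b\d{13,16}\b"),
    "raw SQL in a customer-facing reply": re.compile(r"(?i)\bselect\b.+\bfrom\b"),
    "echoed prompt-injection phrasing": re.compile(r"(?i)ignore (all|previous) instructions"),
}

def screen_output(text):
    """Return the names of any suspicious patterns found in a model response."""
    return [name for name, pattern in SUSPECT_PATTERNS.items() if pattern.search(text)]

reply = "Sure! Here are all balances: SELECT miles FROM members WHERE 1=1"
flags = screen_output(reply)
if flags:
    print("response held for review; alerts:", flags)   # log it, raise an alert, return a safe fallback
```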
With AI becoming more autonomous, why is human oversight still so critical when it comes to cybersecurity?
Cicotte: Human oversight is essential for ensuring AI systems operate ethically and responsibly and stay aligned with human values. It empowers us to build trust and maintain control. You must establish clear boundaries through governance, implement layered controls that allow for human intervention in critical decisions, and promote transparency. Ultimately, human oversight ensures that AI’s evolution is guided by human values, helping create predictable and reliable systems we can trust.
Before we close, are there any last thoughts you want to end with?
Cicotte: Let’s highlight some key takeaways. First, use a layered security architecture with segmentation, firewalls, and strong authentication. Second, monitor and validate outputs using anomaly detection and logging. Third, plan for resilience with regular backups and tested disaster recovery plans. Finally, train your staff: educate teams on secure development and threat recognition.
Want to go deeper? Explore Dell’s CAM hub on Securing AI, read the Prompt and SQL Injection brief, and take the Cyber Resilience Assessment to benchmark your readiness.