

Agentic AI
Practical Steps For Securing Agentic AI
In the first part of this series I focused on why the AI threat model is shifting from syntax to semantics and what that means for data, supply chain and runtime security. In this second part I'll cover the remaining two points, with a focus on agentic AI: an AI-ready foundation from edge to cloud, and practical patterns for learning your way into secure agentic AI.
How does agentic AI change our security and risk model?
So far we have largely been talking about systems. Agentic AI brings us back to actors.
When I work with customers on agentic AI, I encourage them to think of agents as virtual team members living inside their organization. These agents can understand context across multiple systems, break down complex tasks, call tools, make decisions and trigger changes in systems of record. They are persistent, stateful and increasingly capable.
I think about securing agentic AI as a control loop across four stages:

From a security perspective, that has two important implications.
First, agents sit inside workflows and infrastructure, not just at the edge. A compromised chatbot may leak data. A compromised agent may modify records, approve transactions or change configurations if it’s not controlled and monitored correctly. Every new agent extends the potential blast radius if it is misconfigured or abused.
Second, agents blur the line between code and identity. They act on behalf of people and systems, sometimes autonomously, but today often under human supervision. That means they need to be treated with the same discipline as users and devices.
This is where Zero Trust provides a useful mental model: once again, we are building on top of existing frameworks and controls rather than starting over. If your existing security posture assumes "never trust, always verify" for human users and endpoints, the natural next step is to extend that posture to agents.
In practice, this means:
- A registry of agents and tools: You should be able to answer, at any moment, which agents exist, who owns them, which tools they are allowed to call, and which data they can access. That registry should align with your identity and access management systems, not live in a spreadsheet on someone’s desktop.
- Just-in-time, least-privilege permissions: Agents should receive access only when they need it, for specific tasks and limited durations, and that access should be automatically revoked when the work is complete. This mirrors modern just-in-time access patterns for admins and privileged users, and it significantly reduces the window in which a compromised agent can do damage.
- Oversight and visibility into inputs, steps and outputs: Security, risk and operations teams need clear telemetry on what agents are doing: which prompts they are receiving, which tools they are calling, which actions they are taking and how often. That includes anomaly detection for unusual patterns and kill switches for high-risk workflows.
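To make the first two of these concrete, here is a minimal sketch of what a registry with just-in-time, least-privilege grants might look like. All names here (`AgentRegistry`, `grant`, `authorize`) are hypothetical illustrations, not a real product API; an actual deployment would back this with the organization's IAM system rather than an in-memory store.

```python
import time
import uuid
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: str
    owner: str               # the accountable business function
    allowed_tools: set[str]  # tools this agent may ever be granted

@dataclass
class Grant:
    agent_id: str
    tool: str
    expires_at: float        # epoch seconds; access lapses automatically

class AgentRegistry:
    """Hypothetical registry: who exists, who owns them, what they may call."""

    def __init__(self):
        self.agents: dict[str, Agent] = {}
        self.grants: dict[str, Grant] = {}
        self.audit_log: list[tuple[float, str, str, str]] = []

    def register(self, owner: str, allowed_tools: set[str]) -> Agent:
        agent = Agent(str(uuid.uuid4()), owner, allowed_tools)
        self.agents[agent.agent_id] = agent
        return agent

    def grant(self, agent_id: str, tool: str, ttl_seconds: int) -> str:
        # Just-in-time: access is scoped to one tool and a short window.
        agent = self.agents[agent_id]
        if tool not in agent.allowed_tools:
            raise PermissionError(f"{tool} is not permitted for this agent")
        token = str(uuid.uuid4())
        self.grants[token] = Grant(agent_id, tool, time.time() + ttl_seconds)
        return token

    def authorize(self, token: str, tool: str) -> bool:
        # Every allow/deny decision is recorded for oversight.
        g = self.grants.get(token)
        ok = g is not None and g.tool == tool and time.time() < g.expires_at
        self.audit_log.append(
            (time.time(), g.agent_id if g else "unknown", tool,
             "allow" if ok else "deny"))
        return ok

registry = AgentRegistry()
agent = registry.register(owner="finance-ops", allowed_tools={"ledger.read"})
token = registry.grant(agent.agent_id, "ledger.read", ttl_seconds=300)
print(registry.authorize(token, "ledger.read"))   # within scope and TTL
print(registry.authorize(token, "ledger.write"))  # out of scope: denied
```

The point of the sketch is the shape, not the implementation: agents are registered with an owner, access is granted per task with an expiry, and every decision leaves an audit trail a security team can query.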
As I talked about in my last post (Making Agentic AI Practical: Infrastructure and Governance for Leaders | Dell), governance patterns like human in the loop, human on the loop and human near the loop become even more important as agents take on more responsibility. Someone has to own the outcome of each agentic workflow, even if the agent executes most of the steps. That ownership should be explicit: which business function is responsible, who signs off on the guardrails and who is accountable if something goes wrong.
How can Dell provide an AI‑ready, secure foundation from edge to cloud?
All of this can sound daunting, particularly if your organization is still early in its AI journey. The good news is that you do not have to design and build every piece from scratch.
At Dell, we are focused on helping organizations deploy AI on an AI-ready foundation that brings these security concepts together across infrastructure, platforms, models and workflows.
With Dell AI Factory, customers can run high-performance AI workloads in data centers, on modern client platforms like Windows 11 and at the edge utilizing Dell NativeEdge, all with clear control over where data resides, how models are deployed and how workloads are monitored and governed. Validated hardware and software stacks help reduce complexity and improve resilience, while integrated storage and data protection platforms support robust backup and recovery for critical AI data.
On top of that infrastructure, our ecosystem of partners brings enterprise-grade AI platforms, models and tools into a governed framework. That includes curated model catalogs and integrations with marketplaces like Hugging Face, agentic platforms like Cohere North, and client security embedded into Windows 11, enabling organizations to experiment with a wide range of models while still maintaining control over IP, security and compliance.
For agentic AI specifically, we are working with customers to design single, well-governed frameworks that span agents, tools, data and workflows, rather than allowing each team to build isolated stacks. The goal is to give organizations a repeatable pattern for deploying agents on top of a secure, resilient platform, with shared identity, observability and governance.
None of this eliminates the need for careful design, continuous monitoring or strong internal controls. But it does mean you can start from a foundation that understands AI’s unique demands, rather than retrofitting yesterday’s platforms for today’s workloads.
Learning your way into secure AI
The final point I want to emphasize is that waiting for the perfect AI security architecture is itself a risk. The technology is moving quickly. So are your competitors and, unfortunately, potential attackers. You can spend so long trying to build the perfect AI Factory from scratch that the factory never produces anything.
A more sustainable approach is to:
- Start with a clear view of your data, regulatory obligations and risk appetite.
- Choose a small number of high-value, low to moderate risk AI use cases that you can deploy on a secure foundation.
- Build in security, governance and observability from day one, including the patterns we have discussed for data control, supply chain, secure DevOps, runtime and agents.
- Use those early deployments to refine your frameworks, then scale what works.
In my experience, organizations that approach AI this way move faster and with more confidence. They are able to have credible conversations with boards, regulators and customers about where data lives, how models are governed and who is accountable for outcomes.
AI security does not have to be an obstacle to innovation. Designed well, it becomes an enabler: the foundation that lets you adopt generative and agentic AI at speed, while protecting the data, systems and people who rely on it.
