Making Agentic AI Practical: Infrastructure and Governance for Leaders

Discover how leaders can make agentic AI practical with the right infrastructure and governance. Learn strategies to build trust and drive real business value.

Across Canada and around the world, I see leaders starting to move from experimentation with generative AI to building full agentic AI systems that can take meaningful action inside their businesses.  Agentic AI promises a step change in productivity and decision quality, but it also raises new questions about governance, risk and infrastructure readiness. 

Agentic AI will land differently across sectors, yet a few themes are consistent.  You need to meet demanding expectations for privacy, security and data residency, often across multiple jurisdictions, while still moving fast enough to stay competitive globally.  Many organizations are also looking for ways to keep sensitive data close to home, even as they mix on premises infrastructure with public cloud services. 

At Dell Technologies, I work with customers who are making agentic AI real using the framework of our Dell AI Factory, and through partnerships with innovators like Cohere, whose North platform brings agentic AI workflows directly to enterprise data on Dell infrastructure.  I believe that amid all the experimentation, we need a practical way to think about agentic AI: what changes when we implement it across an enterprise, what it means for your stack and operating model, and how an AI-ready infrastructure and governance model helps you move quickly without compromising trust. 

What is agentic AI, and how does it reshape your AI strategy?

When I talk about agentic AI, I am describing AI systems that do more than generate content or answer questions.  They can plan, take actions across tools and applications, react to feedback, and work toward goals on your behalf.  Rather than being a single chatbot, an agentic system is a digital workforce of AI agents that: 

  • Understand context across multiple data sources 
  • Break complex objectives into smaller tasks 
  • Call tools and systems to execute those tasks 
  • Learn and improve over time 

In practice, I focus on AI agents as software systems that autonomously make decisions and take actions to achieve objectives, often using planning, memory and reasoning capabilities.  This is a meaningful evolution from early generative AI pilots that were limited to static prompts and one-off outputs. 
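The loop behind such a system can be sketched in a few lines of Python.  This is a minimal illustration of the plan, act, learn cycle described above; the names (`run_agent`, `plan_next_step`, the tool registry) are illustrative stand-ins, not any specific product's API.

```python
# Minimal agent loop: plan the next step, act through a registered tool,
# record the result to memory, and repeat until the objective is met.
# All names here are illustrative stand-ins, not a real framework API.

def run_agent(objective, tools, plan_next_step, max_steps=10):
    memory = []                                    # running record of actions and results
    for _ in range(max_steps):
        step = plan_next_step(objective, memory)   # model-driven planning over context
        if step["action"] == "done":
            return step["result"], memory
        tool = tools[step["action"]]               # call a tool to execute the task
        result = tool(**step["args"])
        memory.append({"step": step, "result": result})  # feedback for the next plan
    raise RuntimeError("objective not reached within step budget")
```

The step budget and explicit memory are the point: unlike a one-off prompt, the agent carries context forward between actions, and the loop gives you a natural place to bound and observe its behavior.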

When I talk with leaders about agentic AI supporting and improving business processes, I ask them to think of these agents as virtual team members inside their organization, not external tools they occasionally call.  Most teams want these agents to live fully within their control, with clear rules about where they can act, what they can access and how they interact with people and systems. 

Unlike public generative AI tools or anonymous API endpoints on the internet, agentic AI that drives your workflows needs to run inside your business.  It needs to follow your governance, your security controls and your compliance policies, and it needs to be accountable and traceable in the same way as any other critical system, person or process. 

From a strategy perspective, I see agentic AI changing at least three things. 

  1. AI is no longer just a front-end experience.
    Agents sit deeply inside workflows.  They open tickets, call APIs, draft and send communications, orchestrate human approvals and connect cloud and on premises systems.  That makes your infrastructure, data and security architecture central to AI strategy, not an afterthought.
     
  2. The unit of value becomes the workflow, not just the model.
    Agentic AI combines models, tools, data, policies and orchestration into reusable patterns like customer issue resolution or contract review.  This is where Dell AI Factory is focused: providing an integrated foundation that can host many such patterns with consistent performance, security and operations.
     
  3. Risk shifts from “what did the model say?” to “what did the system do?”.
    When agents can trigger changes in systems of record, governance must move upstream into how workflows are designed, how they are monitored and who is accountable for the results.
     

For leaders in Canada and beyond, there is an additional lens.  Many organizations must operate across federal, provincial and international rules for privacy, cyber and data residency.  Agentic AI will only thrive in that context if compliance and control are designed into the stack, not bolted on per project. 

Build a single, well governed framework for agentic AI

To avoid fragmented adoption and to maximize learning, I recommend treating the agentic AI stack as an end-to-end system that covers: 

  • Infrastructure and hardware 
  • Platforms and orchestration 
  • Models and tools 
  • Workloads and workflows 

The key is to have meaningful control across all four layers so that you can prove where data resides, how models are deployed, which agents can act where and how decisions are logged and audited. 

Start with an AI ready infrastructure foundation

From my perspective, an AI ready agentic stack should give you: 

  • Control of infrastructure:
    You need the option to run high performance AI workloads in data centers and edge locations you trust, with clear security controls and data governance.  Dell AI Factory infrastructure and Dell AI Data Platform, along with validated software stacks, are designed to deliver that control for demanding AI and agentic workloads. 
  • Control of the platform:
    The orchestration layer that runs agents should be deployed in environments you govern.  With Dell AI Factory, organizations can run modern Kubernetes-based AI platforms and agentic frameworks in their own environments, spanning on premises and cloud, while keeping administrative control where their compliance teams expect it to be. 
  • Control of models and tools:
    Many organizations will use a mix of open, proprietary and domain specific models.  Dell’s ecosystem, including our work with Cohere North, is designed to accelerate agentic AI adoption by bringing the models to the data rather than forcing data to leave trusted environments. 
  • Control of workloads:
    Finally, your business units should be able to define, deploy and manage their own workflows inside a governed framework, not by spinning up isolated pilots across different stacks.  My focus over the last few years has been on helping customers reduce sprawl, starting with Kubernetes, and on enabling secure, on premises agentic workflows at scale, which is especially important for regulated sectors like financial services, healthcare and the public sector. 

This end-to-end control is valuable in any market where privacy and regulation matter, but it is especially relevant in regulated environments, where organizations often carry obligations to keep specific types of data in defined locations or under specific governance models.  An AI stack that respects those constraints from day one will be easier to explain to boards, regulators and customers. 

Why a single framework matters

The second design principle I emphasize is to build a single framework, not a collection of disconnected stacks.  If each team builds agentic AI in its own way, you will quickly accumulate: 

  • Multiple ways of connecting to core systems 
  • Inconsistent security policies and identity models 
  • Duplicate integrations for the same tools 
  • Conflicting definitions of what approval or override means 
  • A growing backlog of audits and risk reviews for each separate implementation 

I have seen how this becomes inefficient, creates confusion about who is responsible for what and turns security into a constant game of catch up. 

Based on Dell’s experience with AI Factory deployments, I see organizations getting better outcomes when they define a standard architecture, governance model and set of services for AI workloads, then let individual teams compose use cases on top of that foundation.  In practice, that single framework should include at least: 

  • A shared identity and access control model for human users, agents, tools, and services 
  • Common patterns for connecting to systems of record, with reusable connectors and policies 
  • A unified observability and logging approach that captures both model behavior and downstream system actions 
  • Shared evaluation, safety, and approval mechanisms for new agentic workflows 
  • A catalog of approved tools, models, and blueprints that teams can reuse, along with a consistent way for agents to communicate 
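The shape of such a framework can be sketched concretely.  The Python below shows, under simplified assumptions, how a shared catalog might combine three of the elements listed above: one registry of approved tools, one access-control check per agent identity, and one unified audit log covering every downstream action.  `ToolCatalog`, `audit_log` and the method names are hypothetical, not a Dell or Cohere API.

```python
# Sketch of a single governance layer for agents: approved tools only,
# a shared identity-based grant model, and one audit log for every action.
# All names are illustrative, not a real product API.

audit_log = []   # unified record of who did what, with which tool and result

class ToolCatalog:
    def __init__(self):
        self._tools = {}    # name -> callable, approved tools only
        self._grants = {}   # agent identity -> set of permitted tool names

    def register(self, name, fn):
        self._tools[name] = fn

    def grant(self, agent_id, tool_name):
        self._grants.setdefault(agent_id, set()).add(tool_name)

    def invoke(self, agent_id, tool_name, **kwargs):
        if tool_name not in self._tools:
            raise PermissionError(f"{tool_name} is not an approved tool")
        if tool_name not in self._grants.get(agent_id, set()):
            raise PermissionError(f"{agent_id} may not call {tool_name}")
        result = self._tools[tool_name](**kwargs)
        audit_log.append({"agent": agent_id, "tool": tool_name,
                          "args": kwargs, "result": result})
        return result
```

Because every team routes agent actions through the same catalog, security reviews, connector reuse and audit all happen once, instead of once per pilot.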

The Dell and Cohere solution is a concrete example of this principle in action.  Cohere North provides an agentic AI workspace that can connect to both on premises and cloud data sources, while Dell AI Factory infrastructure and Dell Automation Platform provide standard deployment, security and operations patterns.  In my view, that combination gives organizations a practical way to build a single framework while still moving quickly on use cases. 

Governing agentic AI: humans on the loop, accountable ownership and progress over perfection

Agentic AI does not remove humans from the picture.  It changes the role of humans from doing every step in a process to supervising, designing and owning the systems that do the steps.  I believe we must emphasize human oversight, clear responsibility and pragmatic adoption as critical success factors. 

Human on the loop and near the loop

You already know the idea of “human in the loop,” where people approve or correct each model output before it is used.  For agentic AI, that approach can be too slow or expensive in many scenarios.  In my work, I rely on two other patterns to support the adoption of agentic systems. 

  • Human on the loop. A human supervises the system, with visibility into what agents are doing, the ability to set guardrails and the right to intervene or shut things down when needed.  Monitoring tools, dashboards and alerts make it possible to oversee multiple agents and workflows without approving every single action. 
  • Human near the loop. In some cases, a human may not be watching every workflow in real time, but the design ensures that humans remain close enough to step in quickly.  That can include clear escalation paths, thresholds for automatic handoff to a person and post action reviews for higher risk tasks.  Humans are still accountable, even if they are not watching every transaction. 

Together, these patterns let you use AI at speed.  You choose where you require approvals, where you allow agents to run within tight guardrails and where you reserve human sign off for critical exceptions.  This aligns with the broader view I share with customers, where human oversight and strategic control remain central even as agents handle more routine work. 
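One way to make that choice operational is to route each proposed action by risk.  The sketch below assumes a simple risk score per action; the thresholds and function name are illustrative, and a real deployment would derive risk from workflow-specific policy rather than a single number.

```python
# Sketch of risk-based routing for agent actions: low-risk actions run
# inside guardrails, medium-risk actions queue for human approval (on the
# loop), high-risk actions escalate immediately (near the loop steps in).
# Thresholds and names are illustrative assumptions.

def route_action(action, risk_score, approve_above=0.4, escalate_above=0.8):
    if risk_score >= escalate_above:
        return "escalate"        # clear escalation path, human takes over
    if risk_score >= approve_above:
        return "await_approval"  # human on the loop signs off first
    return "auto_execute"        # agent proceeds within tight guardrails
```

The design choice is that the thresholds live in one place: tightening or loosening oversight for a workflow becomes a policy change, not a re-engineering effort.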

Someone must own the outcome

No matter how advanced your agentic AI becomes, I believe a human team and ideally a named individual must remain responsible for the output of the system.  That responsibility should be explicit. 

For each agentic workflow, I encourage leaders to be clear on: 

  • Which business function owns the process 
  • Who signs off on the policies and guardrails 
  • Who is accountable if something goes wrong 

In practice, I see many organizations creating joint ownership between business, IT and risk teams, often through AI councils or product owner roles for key workflows.  Dell’s work with customers and partners shows that this cross functional ownership model is becoming a common pattern as organizations scale agents into production. 

Explicit ownership is especially important in regulated sectors.  If an agent interacts with citizens, patients or investors, regulators will want to know who is responsible.  A well governed agentic stack, backed by Dell infrastructure and controlled platforms like Cohere North on premises, makes it easier to answer that question with confidence, because you can demonstrate where the system runs, how it is configured, and who has access. 

Do not wait for the perfect stack

There is a real risk in spending so long designing or deciding on the “perfect” agentic architecture that you never deliver outcomes, while the technology and your competitors move ahead.  I still meet organizations that are stuck in pilot mode for AI, often due to complexity, security concerns and lack of internal expertise. 

A better pattern, in my experience, is to start small on a solid foundation, then iterate.  (This is true for all AI implementations.) 

  • Pick a handful of high value, low to moderate risk workflows where agents can save time or improve quality. 
  • Deploy those workflows on a standard platform such as the Dell AI Factory, which is engineered to deploy in weeks, not months, and to reduce integration complexity. 
  • Use those early deployments to refine your governance patterns for human in, on and near the loop, while strengthening logging, evaluation and handoffs. 
  • Codify what works into your enterprise framework and reuse it for the next wave of use cases. 
  • As you realize ROI from these AI use cases, allow those to fund your next project, creating a flywheel of success and a revenue stream to take on new, strategic processes. 

This is exactly how I see Dell bringing agentic AI to market, with a focus on simplifying deployment, maintaining control of data and accelerating time to value.  Organizations in Canada, and globally, can leverage this combination to turn strategic AI ambitions into practical business outcomes. 

Where I see Dell helping leaders in Canada and beyond

Putting this together, I believe an AI-ready, agentic strategy for Canada and for global markets should: 

  • Treat governance and control as full stack concerns across infrastructure, platforms, models and workflows. 
  • Establish a single, reusable framework for agentic AI to avoid fragmented, hard to govern stacks. 
  • Keep humans on and near the loop, with clear accountability for every agentic workflow. 
  • With clear metrics, move quickly from pilots to outcomes on top of a trusted, scalable foundation. 

Dell Technologies is focused on exactly this intersection.  The Dell AI Factory provides the infrastructure and services to support agentic AI from edge to core to cloud, while our ecosystem partners deliver secure, enterprise grade agentic workflows that respect organizational governance and data control requirements.  For Canadian leaders, I see that combination as a path to adopt agentic AI at speed, within a well-governed architecture, and in a way that can scale across global operations. 

The next step is not to design the perfect future state, but to choose a first agentic workflow, put it on an AI-ready foundation and learn your way forward.


About the Author: James Scott

James Scott is the Canadian Field Chief Technology Officer for Dell Technologies. His expertise encompasses cloud architecture, modern application design, artificial intelligence, and IT security. James has been instrumental in assisting clients worldwide with the design, security, and maintenance of multi-cloud environments, and he plays a pivotal role in how organizations are looking to deploy and leverage artificial intelligence to drive productivity and simplify business operations.