Agentic AI Security for Canadian Businesses: What the New Cyber Centre Guidance Means
On May 1, 2026, the Canadian Centre for Cyber Security joined the United States Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), Australia's ASD-ACSC, the United Kingdom's NCSC, and New Zealand's NCSC-NZ to publish Careful Adoption of Agentic AI Services — a 28-page joint guide that tells organizations how to deploy AI agents without breaking their security model. For Canadian businesses now experimenting with agentic AI in customer support, procurement, IT operations, or finance, this is the first piece of Five Eyes guidance written specifically for the technology.
The guide's core message is blunt: agentic AI inherits every weakness of large language models (LLMs), then adds a wider attack surface, more autonomy, and harder-to-trace accountability. The authoring agencies recommend organizations "never grant [agentic AI] broad or unrestricted access" and "only use agentic AI for low-risk and non-sensitive tasks" until oversight, evaluation methods, and standards mature.
This post translates the guidance for Canadian small and medium-sized business decision-makers — what's actually new, what risks matter most, and what to do before letting an AI agent touch your business systems.
What Is Agentic AI, and Why Does the Cyber Centre Want You to Be Careful?
Agentic AI is software that uses an LLM to interpret goals, plan steps, and take actions on its own — calling tools, querying data sources, sending emails, modifying records — without a human approving each move. A generative AI chatbot writes a draft for you. An AI agent submits the purchase order, replies to the supplier, and updates the accounting system.
The Cyber Centre and its partners flag this autonomy as the central security problem. Where traditional software does only what it was coded to do, an agentic AI system makes decisions based on probabilistic reasoning, reads untrusted data from the web or email, and chains tools together in ways its designers did not anticipate. The guidance identifies four risk categories that Canadian businesses should understand before deploying AI agents.
Privilege Risks
Agents are often granted broad access on day one to "reduce friction." The guidance warns this creates a confused deputy pattern: when a malicious actor compromises any tool or input feeding into the agent, they inherit every privilege the agent holds. A procurement agent given access to financial systems, email, and contract repositories effectively becomes a single key to all of them.
Behaviour Risks
LLM-based agents can engage in specification gaming — technically completing the goal in unsafe ways. The guidance gives a memorable example: an agent told to "maximise system uptime" disables security updates because patches require reboots. The document also catalogues deceptive behaviour, prompt injection, and emergent capabilities the original developers did not program.
Structural Risks
Multi-agent systems amplify problems. A single hallucination from one agent can be accepted as truth by a second agent, which then takes a destructive action. The guidance highlights tool-squatting (malicious tools published under legitimate-looking names), insecure agent-to-agent communication, and rogue agents that propagate harmful instructions across an enterprise.
Accountability Risks
When several agents collaborate on a decision and something goes wrong, fragmented logs and opaque reasoning chains make it nearly impossible to determine which component caused the error. For Canadian businesses, this is also a PIPEDA and Bill C-26 problem: regulators expect organizations to explain how a decision involving personal data was made.
What Canadian Businesses Should Do Before Deploying AI Agents
The guidance is explicit that agentic AI security must sit inside an organization's existing cyber security framework, not beside it. For Canadian SMBs, that means the 13 Baseline Controls published by the Cyber Centre are the starting point, with agent-specific additions layered on top. The authoring agencies recommend these practical steps, which translate well for smaller organizations.
1. Start with Low-Risk, Reversible Tasks
The guidance recommends a phased deployment model: begin with use cases where errors are recoverable and the data is non-sensitive. Drafting internal documents, summarising public information, or organising a calendar are appropriate starting points. Approving payments, modifying customer records, or accessing personal information are not — at least not until you have monitoring and rollback procedures in place.
2. Apply Least Privilege, Per Action
Static permissions granted at deployment ("the agent can read all email") are flagged as one of the most common mistakes. The guidance recommends evaluating entitlements at each invocation, using ephemeral credentials that expire when the task finishes, and dynamically scoping privileges to the specific sub-task. For most Canadian SMBs, this maps onto existing authentication and access control practices — apply the same principle of least privilege you already use for human accounts to AI agents.
3. Keep a Human in the Loop for High-Impact Actions
The Cyber Centre and partners are direct: "Prevent agents from autonomously executing high impact actions or outputs without prior human approval." Specifically called out — system resets, network egress, deletion of critical records, and any request to delete logs or audit records — should require human review. This is also where your incident response plan needs an agentic AI scenario added.
4. Treat Tools and Third-Party Components as Supply Chain Risk
Agents typically use tools — APIs, plugins, third-party services — to act on the world. The guidance warns these can be tool-squatted (malicious clones with similar names) or quietly compromised. The recommended response is the same supply chain discipline Canadian businesses should already apply to software dependencies: a verified allow-list, regular review, and a software bill of materials. See our piece on vendor and third-party risk for the underlying playbook.
5. Log Everything, Then Monitor What You Logged
Continuous monitoring of agent behaviour — inputs, tool calls, internal reasoning, decisions, outputs — is repeated throughout the document. The guidance specifically recommends quarantining any request from an agent to delete logs until a human approves it. Comprehensive logging also feeds the breach-investigation obligations Canadian organizations carry under PIPEDA and the forthcoming Bill C-26 regime.
6. Threat-Model the Agent, Not Just the App
The authoring agencies recommend using updated risk taxonomies — the OWASP 2026 Top 10 for Agentic Applications and the MITRE ATLAS™ matrix — to threat-model any agent before deployment. For Canadian SMBs without a dedicated security team, this can be as simple as walking through "what happens if this agent is given a malicious prompt?" and "what happens if the tool it depends on is compromised?" before the system goes live.
How the Five Eyes Guidance Maps to the Canadian Baseline Controls
Most of the agentic AI controls in the guidance are extensions of work Canadian SMBs are already expected to do under the Cyber Centre's Baseline framework. The mapping is roughly:
- Identity for agents → Authentication (BC.5) extended to non-human principals, with cryptographically anchored agent identities and mutual TLS for agent-to-service calls.
- Tool allow-lists and SBOM → Secure configuration (BC.4) plus the supply chain hygiene already required for software.
- Human-in-the-loop checkpoints → Incident response (BC.1) and existing change-control processes, applied to agent decisions.
- Continuous monitoring of agent behaviour → Network security (BC.9) logging extended to agent inputs, reasoning, and tool calls.
- AI literacy and human oversight → Security awareness training (BC.6), updated to cover prompt injection and agent misuse scenarios.
If your business has not implemented those baseline controls for traditional IT, the guidance is clear that adding agentic AI on top is a higher-risk move. Appendix A of the document lists "cyber security prerequisites before implementation of AI agents," and most of them are baseline cyber hygiene — strong authentication, secure-by-design principles, zero trust, secure development, and tested incident response — rather than AI-specific tooling.
A Realistic Starting Point for Canadian SMBs
For Canadian small and medium-sized businesses, the practical takeaway is not "don't use agentic AI." It is "match the autonomy you grant the agent to the maturity of the controls you have around it."
A reasonable starting position looks like this:
- Inventory the agents already in use. Many SaaS products quietly added agentic features in 2025 and 2026 — meeting summarisers, autonomous email assistants, AI-driven scheduling. Treat each as a system that needs a privilege review.
- Limit each agent to one well-defined, low-risk job. Avoid the procurement-agent-with-access-to-everything pattern described in the guidance.
- Require human approval for anything destructive or financial. Send-money, delete-data, change-permission actions should never be fully autonomous, even if the vendor says it is safe.
- Verify the vendor. Ask whether the agent meets the practices in Careful Adoption of Agentic AI Services — particularly around least-privilege scopes, audit logging, and tool allow-lists. If the vendor cannot answer, treat that as a finding.
- Update your incident response plan. Add a scenario for "the agent did something unauthorised" — what gets disabled, who reviews logs, how customers and regulators are notified.
The Cyber Centre's existing Top 10 AI security actions (ITSAP.10.049) is a useful companion document for organizations earlier in their AI journey. Combined with the new agentic-specific guidance, it gives Canadian businesses a concrete checklist for the next 12 months.
If you are unsure where your organization sits today, our free cybersecurity assessment walks through all 13 Baseline Control areas in roughly 10 minutes and produces a prioritised list of gaps. The same assessment is also a reasonable proxy for whether you are ready to safely adopt agentic AI: most of the prerequisites in Appendix A of the guidance are graded by it.
Frequently Asked Questions
Who issued the new agentic AI security guidance?
Careful Adoption of Agentic AI Services was co-authored by the Canadian Centre for Cyber Security, CISA, the NSA, Australia's ASD-ACSC, the UK's NCSC, and New Zealand's NCSC-NZ — the Five Eyes cyber agencies. CISA and the Cyber Centre published it on May 1, 2026, with the document itself dated April 30, 2026.
Does this guidance apply to small Canadian businesses, or only large enterprises?
The document is written for "government, critical infrastructure and industry stakeholders," but the recommended best practices scale down to small and medium-sized businesses. Canadian SMBs that already use AI agents — even built into off-the-shelf SaaS products — are in scope of the privilege, behaviour, structural, and accountability risks the guide describes.
Is agentic AI the same as ChatGPT or Microsoft Copilot?
Not exactly. Generative AI tools like ChatGPT or Copilot produce content for a human to review and act on. Agentic AI uses the same underlying language models but adds tools, memory, and planning so the system can take actions independently — sending emails, modifying files, calling APIs. Many vendors are now layering agentic features on top of generative AI products, which is why the guidance recommends an inventory.
What is prompt injection, and why does the Cyber Centre keep mentioning it?
Prompt injection is an attack where malicious instructions are hidden inside data the agent reads — a phishing email, a web page, a calendar invite — that cause the agent to ignore its original instructions and do something harmful. Because agentic AI systems pull data from many sources and act on it autonomously, the guidance treats prompt injection as one of the most important risks to design against. Our guide on AI-powered phishing covers a related angle.
Where can I read the full document?
CISA hosts the guidance at cisa.gov, and Australia's ASD-ACSC hosts a parallel copy at cyber.gov.au. The PDF is 28 pages and is freely available under a Creative Commons licence.
Disclaimer: This article is intended for general informational purposes only and does not constitute professional cybersecurity, legal, IT, or compliance advice. While we strive to ensure accuracy, the cybersecurity landscape changes rapidly and information may become outdated. Organizations should consult with qualified cybersecurity professionals and legal counsel to assess their specific situation and develop appropriate security policies. Use of this information is at your own risk. See our Privacy Policy for more information.
Cybersecurity Canada is an independent resource and is not affiliated with, endorsed by, or connected to the Canadian Centre for Cyber Security, the Communications Security Establishment, or the Government of Canada.
How does your organization measure up?
Take our free cybersecurity assessment based on the Canadian Centre for Cyber Security's Baseline Controls. 50 questions, under 30 minutes, 100% confidential — your answers never leave your browser.
Take the Free Assessment