Agentic AI — systems capable of autonomous, multi-step decision-making without continuous human oversight — is rapidly moving from experimental use cases into production environments. As enforcement of the EU AI Act approaches its 2 August 2026 general application date, the governance challenges posed by these systems are becoming a central concern for compliance teams.

The core problem is traceability. AI agents can act without leaving a clear record of what they did, when they did it, and why. If an organisation cannot trace an agent’s actions and cannot control the scope of its authority, it cannot demonstrate to regulators that the system is operating safely or lawfully. This creates a direct tension with the AI Act’s requirements for documentation, transparency, and human oversight, particularly for systems classified as high-risk.
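To make that gap concrete, the sketch below shows one shape an agent action record could take, capturing the what, when, and why of each step. It assumes an in-process agent loop where tool calls can be intercepted; all names and fields are illustrative assumptions, not drawn from any particular framework.

```python
# A minimal sketch of a traceable agent action record, assuming an
# in-process agent loop where each tool call can be intercepted.
# All identifiers and field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json
import uuid

@dataclass
class AgentActionRecord:
    """One auditable step: what the agent did, when, and why."""
    agent_id: str
    action: str       # what: the tool or operation the agent invoked
    rationale: str    # why: the agent's stated justification for the step
    inputs: dict      # the parameters the agent passed to the action
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(  # when: UTC, ISO 8601
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialise to a JSON line for an append-only audit trail."""
        return json.dumps(self.__dict__, sort_keys=True)

# Example: log a customer lookup before the agent executes it.
record = AgentActionRecord(
    agent_id="invoice-agent-01",
    action="crm.lookup_customer",
    rationale="Verify account status before issuing a refund",
    inputs={"customer_id": "C-4821"},
)
print(record.to_log_line())
```

Written as append-only JSON lines, records like this give an auditor a reconstructable timeline of agent behaviour rather than a post-hoc narrative.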

Agent risks are governed by the Act’s provisions for both general-purpose AI (GPAI) models and high-risk systems. Because most current agents are built on GPAI models with systemic risk, model providers must assess and mitigate the systemic risks those agents introduce. However, agents can also qualify as high-risk systems in their own right, depending on the use case, especially where personal data is processed or financial operations take place. The penalties for governance failures in these areas are substantial: fines of up to EUR 15 million or 3% of global annual turnover for most infringements, rising to EUR 35 million or 7% for prohibited practices.

The EU AI Office has indicated that 2026 guidance will focus on high-risk classification, provider and deployer obligations, substantial modification, value-chain responsibilities, post-market monitoring, and Article 50 transparency. For organisations deploying agentic AI, this means building audit trails, defining authority boundaries, and ensuring that human oversight mechanisms are not merely nominal.
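One way to make authority boundaries enforceable rather than nominal is to route every proposed agent action through a single authorisation gateway. The sketch below assumes such a gateway exists in the deployment; the policy structure, action names, and limits are invented for illustration.

```python
# A sketch of an authority-boundary check, assuming every proposed agent
# action passes through one gateway before execution. The policy contents
# (action names, limits) are invented for illustration.
AGENT_POLICY = {
    "invoice-agent-01": {
        "allowed_actions": {"crm.lookup_customer", "billing.issue_refund"},
        "max_refund_eur": 500,                        # hard financial ceiling
        "requires_human_approval": {"billing.issue_refund"},
    }
}

def authorise(agent_id: str, action: str, params: dict) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed action."""
    policy = AGENT_POLICY.get(agent_id)
    if policy is None or action not in policy["allowed_actions"]:
        return "deny"        # outside the agent's defined authority
    if action in policy["requires_human_approval"]:
        return "escalate"    # route to a human reviewer before execution
    if params.get("amount_eur", 0) > policy["max_refund_eur"]:
        return "escalate"    # within authority but above the financial limit
    return "allow"

# A refund always goes to a human; an unknown action is refused outright.
assert authorise("invoice-agent-01", "billing.issue_refund",
                 {"amount_eur": 120}) == "escalate"
assert authorise("invoice-agent-01", "hr.delete_record", {}) == "deny"
```

The design choice that matters is the single choke point: if actions can bypass the gateway, the authority boundary exists only on paper, which is precisely the kind of nominal oversight regulators are expected to probe.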

Acompli perspective: Agentic AI amplifies the need for structured governance documentation. Organisations should conduct impact assessments that specifically address autonomous decision-making, map the data flows and third-party dependencies in their agent architectures, and establish clear escalation paths for when agents operate outside expected parameters. The AI Act does not require perfection, but it does require evidence of a systematic approach to risk management, and that starts with knowing what your AI systems are doing.
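As one illustration of such an escalation path, the sketch below pauses an agent and notifies a human reviewer when a task drifts outside expected parameters. The thresholds, the domain naming convention, and the notification channel are all assumptions made for the example.

```python
# One illustration of an escalation path: block the action and notify a
# human reviewer when a task drifts outside expected parameters. The
# thresholds, domain convention, and notification channel are assumptions.
import logging

logger = logging.getLogger("agent-governance")

EXPECTED = {
    "max_actions_per_task": 20,            # budget before forced review
    "allowed_domains": {"crm", "billing"},
}

def notify_reviewer(agent_id: str, reason: str) -> None:
    """Placeholder: route to an on-call reviewer (ticket, chat, pager)."""
    print(f"ESCALATION: {agent_id} -> human review ({reason})")

def check_and_escalate(agent_id: str, action_count: int, action: str) -> bool:
    """Return True if the action may proceed; otherwise escalate and block."""
    domain = action.split(".")[0]   # e.g. 'billing.issue_refund' -> 'billing'
    if action_count > EXPECTED["max_actions_per_task"]:
        logger.warning("Agent %s exceeded its action budget", agent_id)
        notify_reviewer(agent_id, reason="action budget exceeded")
        return False
    if domain not in EXPECTED["allowed_domains"]:
        logger.warning("Agent %s touched unexpected domain %s", agent_id, domain)
        notify_reviewer(agent_id, reason=f"unexpected domain: {domain}")
        return False
    return True

# Within budget and in an allowed domain: proceeds.
assert check_and_escalate("invoice-agent-01", action_count=3,
                          action="billing.issue_refund")
# Unexpected domain: blocked and escalated.
assert not check_and_escalate("invoice-agent-01", action_count=4,
                              action="hr.delete_record")
```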