After EU AI Act 2.0: How Agentic Workflows Will Be Regulated by 2027

The original EU AI Act was already obsolete on agentic systems by the time it cleared parliament. The amendments specifically targeting agent-to-agent interaction are arriving faster than the industry expects, and they will reshape the deployer's compliance burden.

Teleperson Team · March 2026 · 9 min read

The European Union AI Act, in force across member states since 2024, established a risk-tiered regulatory framework that the rest of the world has been benchmarking against. It also did something the rest of the world has been less willing to acknowledge: it was already obsolete on agentic systems by the time it cleared parliament. The Act's general-purpose AI provisions assume a deployment model in which a model is invoked by a human, produces an output, and the human acts on the output. They do not contemplate systems that act autonomously, transact on a principal's behalf, or coordinate with other systems to commit a binding action.

The amendments addressing this gap, informally referred to in policy circles as "AI Act 2.0" (the formal designation will be different), are now in active development across the European Commission, the European Parliament, and the AI Office, with significant input from member-state regulators. We expect the substantive provisions to be in force by mid-2027. This paper surveys what is coming, why deployers should plan for it now, and how comparable jurisdictions (US federal and state, UK, Singapore, Canada) are responding.

What the original Act missed

The 2024 Act categorizes AI systems into four risk tiers (unacceptable, high, limited, minimal) and applies obligations accordingly. The tiers were defined against a model of AI as a tool: a system that produces predictions, classifications, or generations on which a human acts. Compliance focuses on data quality, transparency, human oversight, and post-market monitoring of the model's outputs.

Agentic systems do not fit this model cleanly. They do not produce outputs that a human acts on; they produce actions that a human supervises or, increasingly, does not supervise at all. The relevant compliance question shifts from "is the output safe to act on" to "was the action permissible to take." The Act's transparency obligations, designed for outputs, do not map cleanly to actions. The human-oversight requirement, designed for high-risk classification systems, does not specify what oversight means when an agent transacts at machine speed across thousands of interactions per minute.

These gaps are not theoretical. They are visible in every member-state regulator's complaint queue. As agentic systems have shipped into consumer-facing roles, particularly in financial services, telecommunications, and customer service, the existing Act has provided regulators with limited tools to address the most consequential failure modes: unauthorized binding actions, opaque agent-to-agent transactions, and consumer-protection violations that fall between the original Act's categories.

What is coming in the amendments

Three substantive provisions are converging in the active drafts. Each will reshape what deployers of agentic systems must build.

An "agentic system" classification. The amendments define agentic AI as a distinct category of system, characterized by autonomous action on a principal's behalf, tool use, and the capacity to commit binding transactions. Systems meeting this definition will face additional obligations beyond the existing risk-tier requirements: explicit declaration of bounded authority, mandatory transaction-level logging in a regulator-accessible format, and required principal confirmation flows for binding actions in consumer-facing deployments.

An interoperability and identity standard. The amendments require agentic systems transacting in EU markets to implement a common identity protocol. The technical specification is still under development, but the policy intent is clear: any agent acting on a consumer's behalf must be cryptographically identifiable to its counterparties, and its principal must be verifiable. This effectively mandates know-your-agent (KYA) checks at the regulatory level for any system operating in EU markets, which will accelerate the adoption of identity standards globally regardless of where vendors are headquartered.

A liability framework. This is the most consequential provision, and the one most actively contested in the drafting process. The amendments establish a default liability rule for agentic systems: the deployer is liable for the agent's actions absent specific exemptions. The exemptions follow the pattern of common-carrier and platform-immunity regimes, but with a narrower carve-out: deployers can disclaim liability for agent actions only if they have implemented specified trust-layer requirements (bounded authority, signed receipts, watcher classification, principal confirmation flows for binding actions). This effectively makes the trust layer compulsory for any deployment that wants to limit its liability exposure.

Comparative posture in adjacent jurisdictions

The EU is the most active regulator on agentic systems, but it is not the only one, and the divergence between jurisdictions creates a meaningful compliance burden for any vendor operating across borders.

The United States has chosen a deregulatory federal posture in 2025–2026, with the previous administration's executive orders on AI safety largely rescinded. The compliance gap is being filled by state-level legislation, particularly California (SB-942 and successor bills), Colorado (the Colorado AI Act, which has been amended to address agentic systems specifically), and New York (financial-services regulation through the Department of Financial Services). The patchwork is real and will get worse before it gets better. Federal pre-emption is being discussed but has not landed.

The United Kingdom has continued its sector-led approach, with the Financial Conduct Authority, Ofcom, and the Information Commissioner's Office each taking responsibility for agentic systems within their respective sectors. The advantage is regulatory specificity; the disadvantage is fragmentation across sectors. We expect a unified AI bill in the next parliamentary session, but its scope on agentic systems specifically remains uncertain.

Singapore and Canada are quietly leading on practical implementation. Singapore's Monetary Authority has published the most detailed agent-conduct guidance for financial-services applications globally; Canada's pending AI and Data Act includes specific provisions for agentic systems that resemble the EU draft amendments. Both jurisdictions are under-discussed in policy circles dominated by EU-vs-US framings, and both will be material markets for agent-to-agent commerce.

What deployers should do now

The compliance horizon is roughly twelve to eighteen months for the EU amendments and somewhat longer for the comparable jurisdictions. The right posture for deployers is to build to the strictest emerging standard now, on the assumption that the others will converge toward it. Specifically:

Implement bounded-authority declarations at the agent level, in a machine-readable format. The technical specification will eventually be standardized; the practice should be in place now.
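As a concrete illustration, a machine-readable bounded-authority declaration could be as simple as the sketch below. All field names and values here are hypothetical; they are not drawn from any published specification, and the eventual standard will differ.

```python
# Illustrative bounded-authority declaration. Field names are hypothetical,
# not taken from any draft regulation or published standard.
AUTHORITY_DECLARATION = {
    "agent_id": "agent-7f3a",                # stable identifier for the agent
    "principal_id": "customer-1942",         # the person or entity the agent acts for
    "permitted_actions": ["quote", "compare", "purchase"],
    "spend_limit_eur": 150.00,               # hard ceiling on any single transaction
    "requires_confirmation": ["purchase"],   # binding actions that need the principal
    "valid_until": "2027-01-01T00:00:00Z",
}

def is_within_authority(action: str, amount_eur: float) -> bool:
    """Check a proposed action against the declared bounds before executing it."""
    d = AUTHORITY_DECLARATION
    return action in d["permitted_actions"] and amount_eur <= d["spend_limit_eur"]

print(is_within_authority("purchase", 99.0))    # True: permitted action, under limit
print(is_within_authority("purchase", 500.0))   # False: exceeds spend limit
print(is_within_authority("transfer_funds", 10.0))  # False: action not permitted
```

The important property is that the declaration is a discrete, inspectable artifact checked before every action, not a description buried in product documentation.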

Implement signed-receipt generation for every transaction, with receipts retained in a format that is verifiable by counterparties and inspectable by regulators. Logging is not sufficient; the receipt must be a discrete artifact.
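A minimal sketch of a receipt as a discrete artifact, using a hash-chained, signed record. This example uses a shared-secret HMAC from the Python standard library as a stand-in; a production deployment would use asymmetric signatures so counterparties can verify receipts without holding the signing key.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"deployer-held-secret"  # stand-in; production would use an asymmetric keypair

def make_receipt(agent_id: str, action: str, amount_eur: float, prev_hash: str) -> dict:
    """Produce a discrete, verifiable receipt artifact for one transaction.
    Receipts carry the hash of their predecessor so gaps or tampering are detectable."""
    body = {
        "agent_id": agent_id,
        "action": action,
        "amount_eur": amount_eur,
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # links this receipt to the one before it
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict) -> bool:
    """Recompute the signature over the receipt body and compare in constant time."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])
```

Note that any mutation of a verified receipt, such as changing the amount after the fact, invalidates the signature, which is what distinguishes a receipt from an ordinary log line.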

Implement watcher classification with a maintained taxonomy of binding actions. Surface principal confirmation for any action the watcher classifies as binding. Document the watcher's operating envelope and false-negative rate.
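In skeleton form, a watcher of this kind reduces to a maintained taxonomy plus a fail-closed classification rule. The action names below are invented for illustration; the real taxonomy would be maintained per deployment and versioned.

```python
# Hypothetical taxonomy of actions; a real deployment would maintain and
# version this taxonomy as the agent's capabilities change.
BINDING_ACTIONS = {"purchase", "cancel_contract", "change_plan", "authorize_payment"}
NON_BINDING_ACTIONS = {"quote", "compare", "summarize", "lookup"}

def watcher_classify(action: str) -> str:
    """Classify a proposed action as binding, non-binding, or unknown.
    Unknown actions are flagged for escalation rather than silently permitted."""
    if action in BINDING_ACTIONS:
        return "binding"
    if action in NON_BINDING_ACTIONS:
        return "non_binding"
    return "unknown"

def requires_principal_confirmation(action: str) -> bool:
    # Fail closed: both binding and unclassified actions surface to the principal.
    # Unclassified actions are exactly where the watcher's false-negative
    # rate would otherwise hide.
    return watcher_classify(action) in {"binding", "unknown"}
```

Treating "unknown" as confirmation-required is the design choice that makes the documented false-negative rate meaningful: the watcher only misses a binding action if the taxonomy actively misclassifies it, not merely because the taxonomy is incomplete.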

Implement identity declarations for agents transacting in your system, even if the standard is still in flux. The cost of supporting multiple identity formats during the transition is meaningfully smaller than the cost of retrofitting to a single standard after the fact.
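Supporting multiple identity formats during the transition amounts to normalizing each incoming wire format into one internal record at the boundary. The two formats sketched below, a DID-style declaration and a signed-envelope style, are illustrative assumptions, not references to any settled standard.

```python
def normalize_identity(declaration: dict) -> dict:
    """Normalize agent identity declarations from several in-flux wire formats
    into a single internal record. The format names are hypothetical; the point
    is that only this boundary function changes as standards converge."""
    if "did" in declaration:
        # DID-style declaration: identifier plus controlling principal
        return {
            "agent_id": declaration["did"],
            "principal_id": declaration.get("controller"),
            "format": "did",
        }
    if "agent" in declaration and "signature" in declaration:
        # Signed-envelope style: nested agent record plus a detached signature
        return {
            "agent_id": declaration["agent"]["id"],
            "principal_id": declaration["agent"].get("principal"),
            "format": "signed_envelope",
        }
    raise ValueError("unrecognized identity format")
```

Downstream code then depends only on the internal record, so retiring a transitional format after convergence is a one-function change rather than a retrofit.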

Plan for liability exposure. The default rule in the EU amendments, and likely in the comparable jurisdictions, is deployer liability with trust-layer exemptions. Insurance for agent actions exists but is immature; pricing will move significantly as case law develops. Capital reserves and contractual indemnification language with counterparties should reflect the emerging exposure profile.

Why this matters strategically

A note that goes beyond compliance. The regulatory environment is the most underweighted variable in agentic-AI competitive analysis. Vendors that build to the strictest emerging standard will have a structural advantage when the regulations land: they will be deployable in the most demanding markets, attractive to enterprise buyers whose procurement processes already incorporate the coming requirements, and shielded from the public-incident risk that will catch competitors who deferred trust-layer investment.

Vendors that defer the build will face a forced compliance project on a regulatory timeline that does not bend to product roadmaps. The cost of that project, summed across architecture, engineering, and lost market access during the build, is materially larger than the cost of building to the standard from the start.

We have written elsewhere about the trust layer as a moat. The regulatory environment is what will turn that moat from optional infrastructure into the price of admission. Vendors planning their 2026–2027 roadmaps should plan accordingly.