As artificial intelligence (AI) systems evolve from simple tools to autonomous digital actors, enterprises are facing a question that cuts to the heart of governance and regulation: when an AI agent makes a decision, who is responsible for the outcome?
In conversation with iTNews Asia, Tobin South, Head of AI Agents at WorkOS, Research Fellow with Stanford’s Loyal Agents Initiative, and Co-Chair of the OpenID Foundation’s Artificial Intelligence Identity Management Community Group, shared insights on the growing accountability crisis in AI and on why autonomous agents are advancing faster than the legal and security systems meant to govern them.
The immediate vulnerability, South explains, is that most AI agents today impersonate users rather than act as explicitly delegated entities.
“When an agent accesses your company’s CRM or financial systems, it often does so in a way that’s indistinguishable from you personally taking that action,” he said.
“That creates massive accountability blind spots. If something goes wrong, there’s no clear audit trail showing it was the agent, not you.”
According to South, this design flaw is more than a technical oversight; it is a potential regulatory and legal catastrophe. Without a verifiable digital record separating human and machine actions, companies risk exposure to fraud, compliance violations, and data breaches that are effectively untraceable.
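The distinction is easiest to see in token form. Below is a minimal sketch in Python (using the PyJWT library) of the two access patterns South contrasts; it borrows the “act” (actor) claim from OAuth 2.0 Token Exchange (RFC 8693), one standard way to express delegation in a token. The issuer, user, agent identifier, and signing key are hypothetical placeholders, and the sketch illustrates the general pattern rather than any particular vendor’s implementation.

```python
import jwt  # PyJWT

SECRET = "demo-signing-key"  # illustration only; production systems would use asymmetric keys

# Impersonation: the agent presents a token indistinguishable from the user's own.
impersonation_token = jwt.encode(
    {"iss": "https://idp.example.com",
     "sub": "alice@example.com",
     "scope": "crm:write finance:read"},
    SECRET, algorithm="HS256",
)

# Explicit delegation: the same access, but the token names the agent as the
# actor ("act") working on Alice's behalf, so downstream systems and audit
# logs can separate human and machine actions.
delegated_token = jwt.encode(
    {"iss": "https://idp.example.com",
     "sub": "alice@example.com",
     "act": {"sub": "agent:crm-assistant-7"},  # hypothetical agent identifier
     "scope": "crm:write finance:read"},
    SECRET, algorithm="HS256",
)

claims = jwt.decode(delegated_token, SECRET, algorithms=["HS256"])
print(claims["sub"], "via", claims["act"]["sub"])
# alice@example.com via agent:crm-assistant-7
```

Both tokens grant the same access; only the second leaves a record that it was the agent, not the user, taking the action.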
Are we repeating the mistakes of the early internet?
South draws a sharp parallel to the early internet era, when security was an afterthought. He said that the industry is repeating the same security mistakes made during the early days of the web, only this time at unprecedented speed and scale.

We’re repeating history at 100x speed with 1000x the stakes. We spent decades retrofitting HTTPS and authentication. AI agents are now spreading across global systems in a matter of months, and by the time the security flaws become obvious, they’ll already be embedded in hundreds of millions of workflows.
- Tobin South, Head of AI Agents at WorkOS and Co-Chair of the OpenID Foundation’s Artificial Intelligence Identity Management Community Group
Unlike a defaced website, a compromised agent could autonomously conduct financial fraud, manipulate medical records, or even spawn new agents before anyone realises something has gone wrong.
While AI’s rapid adoption fuels innovation, it also fuels fragmentation. Major technology platforms are rushing to release their own proprietary identity systems, splintering the ecosystem and multiplying security risks.
“Without convergence on open standards, we’re creating a landscape where interoperability becomes impossible and vulnerabilities multiply,” he cautioned.
South also emphasised that attribution is becoming one of the most urgent challenges. “Preventing malicious AI actions from being blamed on innocent users requires a digital paper trail where every action is stamped with both the bot’s identity and the human who authorised it, like a security camera that never blinks.”
Such auditability, South argues, is the cornerstone of accountability. Without it, trust in AI systems could collapse under the weight of legal disputes and data integrity failures.
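As an illustration of what such a paper trail might look like, the sketch below records every action with both the agent’s identity and the authorising human, then HMAC-signs each entry and chains it to the previous one so that records cannot be silently altered after the fact. The field names, key, and chaining scheme are assumptions made for the example, not a design described in the interview.

```python
import hashlib, hmac, json, time

AUDIT_KEY = b"audit-signing-key"  # illustration only

def record_action(log: list, *, agent_id: str, authorised_by: str,
                  action: str, resource: str) -> dict:
    """Append a tamper-evident entry stamping both identities."""
    entry = {
        "ts": time.time(),
        "agent_id": agent_id,            # the bot that acted
        "authorised_by": authorised_by,  # the human who delegated authority
        "action": action,
        "resource": resource,
        "prev": log[-1]["sig"] if log else None,  # chain to the previous entry
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    log.append(entry)
    return entry

log: list = []
record_action(log, agent_id="agent:crm-assistant-7",
              authorised_by="alice@example.com",
              action="update_record", resource="crm/accounts/42")
```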
Recursive delegation: Innovation meets risk
As AI agents begin to delegate tasks to other agents, a pattern known as recursive delegation, traditional identity and permission systems begin to unravel. “It’s what enables sophistication,” South said, “but it’s also where our current authorisation frameworks completely fall apart.”
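One way to see why existing frameworks strain is to trace a delegation chain explicitly. The sketch below walks the nested “act” claims that RFC 8693 uses to express delegation chains (human to agent to sub-agent) and checks an illustrative rule that each delegate’s scopes must be a subset of its delegator’s; the identifiers and the scope-narrowing rule are assumptions for the example.

```python
def delegation_chain(claims: dict) -> list[str]:
    """Walk nested 'act' claims: human -> agent -> sub-agent -> ..."""
    chain = [claims["sub"]]
    actor = claims.get("act")
    while actor:
        chain.append(actor["sub"])
        actor = actor.get("act")
    return chain

def scopes_narrow(hops: list) -> bool:
    """Illustrative policy: each delegate's scopes are a subset of its delegator's."""
    return all(later <= earlier for earlier, later in zip(hops, hops[1:]))

claims = {"sub": "alice@example.com",
          "act": {"sub": "agent:planner",
                  "act": {"sub": "agent:invoice-bot"}}}
print(" -> ".join(delegation_chain(claims)))
# alice@example.com -> agent:planner -> agent:invoice-bot

print(scopes_narrow([{"crm:write", "finance:read"},
                     {"finance:read"},
                     {"finance:read"}]))  # True: scopes never widen down the chain
```

Every resource server in the chain has to validate the full lineage, which is exactly the burden traditional single-principal authorisation was never designed to carry.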
Companies deploying agents with cross-domain access, particularly in sectors like finance or healthcare, face even steeper challenges. “The security systems protecting your data today were built for humans who click buttons, not AI that can make thousands of decisions per second,” he warned. “It’s like giving your house keys to someone who can teleport.”
At the root of the crisis lies a lack of shared trust frameworks between organisations. “Without them, every other security mechanism, including authentication, authorisation, and audit, becomes significantly weaker,” South explained.
Rethinking identity for agents
Ultimately, South believes the entire lifecycle of identity management must be reimagined for agents. He said that the path forward begins with agent-native identity systems that can dynamically provision, delegate, and revoke permissions in real time, coupled with cross-industry standards for trust and accountability.
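A toy version of that lifecycle might look like the following: a registry that provisions an agent credential tied to its human delegator, keeps it short-lived by default, and revokes it with immediate effect. The registry API is hypothetical and not drawn from any WorkOS or OpenID Foundation specification.

```python
import secrets, time

class AgentIdentityRegistry:
    """Sketch of an agent-native identity lifecycle: provision, check, revoke."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds               # short-lived by default
        self.credentials: dict = {}

    def provision(self, agent_id: str, delegator: str, scopes: set) -> str:
        token = secrets.token_urlsafe(16)
        self.credentials[token] = {
            "agent_id": agent_id,
            "delegator": delegator,          # the human on whose behalf it acts
            "scopes": scopes,
            "expires": time.time() + self.ttl,
        }
        return token

    def revoke(self, token: str) -> None:
        self.credentials.pop(token, None)    # takes effect immediately

    def check(self, token: str, scope: str) -> bool:
        cred = self.credentials.get(token)
        return bool(cred and time.time() < cred["expires"]
                    and scope in cred["scopes"])

registry = AgentIdentityRegistry()
tok = registry.provision("agent:crm-assistant-7", "alice@example.com", {"crm:write"})
assert registry.check(tok, "crm:write")
registry.revoke(tok)
assert not registry.check(tok, "crm:write")  # revocation is instantaneous
```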
“The accountability gap in AI agent ecosystems isn’t a distant concern; it’s unfolding now. As organisations rush to deploy autonomous systems across their operations, they may be building a future where legal responsibility, security, and trust are dangerously misaligned.”
He cautioned that by the time we realise these gaps are baked in, “it may be too late to retrofit accountability back into the system.”