2026: Identity, Trust and the Rise of the “My AI Did It” Excuse

The Independent

SINGAPORE: By now, Singaporeans are quite used to hearing about the Smart Nation vision. Cashless this, digital that. But the next phase is something entirely different. In the coming years, we won’t just use AI tools — we’ll have AI agents acting on our behalf, making decisions, shifting money, and even signing documents while we sleep.

Useful? Definitely.
Frightening? Also yes.

And as Ee Khoon Oon, VP & MD, APAC, puts it bluntly, “autonomous systems will need a human anchor in APAC’s digital economy.” His point is simple: when your AI makes a decision, there must be no ambiguity about who stands behind it.

Because the day will soon come when an AI in Singapore sends your money to a Thai fund or buys a digital asset in Manila. If something goes wrong, claiming “my AI did it” will not save anyone. Regulators will still come looking for the human — or the institution — behind the machine.

This is why the old idea of KYC (Know Your Customer) is no longer enough. As autonomous agents emerge, we need a new concept: KYA — Know Your Agent.

Ee explains it crisply: “If an AI agent is going to act on your behalf, then its identity must be as verifiable as yours.” In other words, if machines are going to behave like mini-employees, they need to be treated like mini-employees — identifiable, traceable, and accountable.

Singapore is already nudging the region in this direction. MAS has been signalling strongly through the FEAT principles, the Veritas initiative and the new AI Risk Management Guidelines. The message is clear and consistent: AI autonomy must come with oversight. No exceptions.

In the near future, every authorised AI agent will need its own verified identity, tied back to a real human’s biometric. A clear chain of custody — from intent in Jakarta to execution in Hong Kong — becomes essential for cross-border trust.
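The chain-of-custody idea can be sketched in code. The toy Python example below is an illustration under stated assumptions, not any regulator's or vendor's scheme: every action record names both the agent and its human principal, and is signed and hash-chained so an auditor can attribute each step. The key, field names, and functions are all hypothetical.

```python
import hashlib
import hmac
import json

# Toy illustration of "Know Your Agent": each action an AI agent takes is
# recorded with the agent's identity, its human principal, and a signature
# chained to the previous record. The key and all field names are invented
# for this sketch; a real deployment would use keys bound to the principal's
# verified (e.g. biometric) enrollment.
PRINCIPAL_KEY = b"key-issued-at-biometric-enrollment"

def _sign(payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(PRINCIPAL_KEY, msg, hashlib.sha256).hexdigest()

def record_action(agent_id: str, principal_id: str, action: str, prev_sig: str) -> dict:
    """Append one verifiable link to the agent's chain of custody."""
    record = {
        "agent_id": agent_id,          # the agent's own verified identity
        "principal_id": principal_id,  # the human who stands behind it
        "action": action,
        "prev": prev_sig,              # chaining prevents silent deletion
    }
    record["sig"] = _sign(record)
    return record

def verify(record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "sig"}
    return hmac.compare_digest(_sign(body), record["sig"])

# Intent in one market, execution in another, one attributable trail:
step1 = record_action("agent-7", "user-sg-123", "initiate SGD transfer", "genesis")
step2 = record_action("agent-7", "user-sg-123", "buy digital asset", step1["sig"])
print(verify(step1) and verify(step2))  # True: both steps attributable
```

Because each record carries the previous signature, no step can be quietly dropped or reattributed after the fact, which is the property cross-border auditors would need.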

And this isn’t theoretical. It’s about to become a daily reality across ASEAN. As digital transactions become increasingly automated through 2026, the entire region will need a unified understanding of how to attribute actions, especially across markets with different regulations and risk appetites.

In APAC’s next transformation phase, the winners aren’t going to be the quickest to deploy AI. They’ll be the ones people actually trust.

And trust, ironically, is becoming harder to earn just as AI becomes more powerful.

Robert Prigge, CEO, summarises the stakes neatly: “In 2026, identity will either be your company’s strongest differentiator, or its weakest link.” His warning reflects a hard reality — attackers are now using tools that didn’t exist two years ago, while many firms are still relying on passwords, static data and one-time checks.

In that kind of arms race, outdated identity systems don’t just slow you down — they expose you.

This is where the region will see a major shift toward continuous identity verification and dynamic trust profiles. As Prigge adds, “When fraud happens, customers don’t blame the criminal — they blame the brand.” That alone explains why companies are scrambling to rethink their identity frameworks.

Meanwhile, Bala Kumar, Chief Product and Technology Officer, sees 2026 as a turning point: “Reusable identity will move from buzzword to operational reality.” Once someone is verified with high assurance, their identity becomes portable — no more repeating the entire onboarding ritual from scratch.

He calls it the moment “authentication collapses into onboarding.” A user verifies once; the trust follows them everywhere.

But not everyone will benefit. Bala notes that reusable identity only works if a company already has the scale, the data rights, and the deep biometric infrastructure required. “You can’t retrofit this, and you can’t fake critical mass,” he says.

Across other sectors, the shift toward identity-led trust is just as pronounced.
Social platforms, under pressure from deepfakes, impersonation, and tightening safety regulation, are beginning to adopt stricter, AI-driven identity checks. Reinhard Hochrieser expects “age estimation paired with strong liveness detection” to become the new norm, with digital identity wallets emerging to minimise unnecessary data sharing.

This theme repeats across experts: privacy, trust, and identity are merging into one integrated concern. Joe Kaufmann, Global Head of Privacy, puts it plainly: “Businesses must collect enough data to meet compliance — but not so much that users feel exposed.” The era of over-collection is ending, replaced by purpose limitation and privacy-by-design.

Even fraud prevention is evolving. Ashwin Sugavanam warns that AI has made it “alarmingly easy to create synthetic identities.” Defensive AI will need to outthink adversarial AI, not just block it. That means multimodal signals, cross-customer fraud intelligence, and real-time detection — not after-the-fact apologies.
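Sugavanam's point about multimodal signals can be illustrated with a toy scoring sketch. The signal names, weights, and threshold below are invented for illustration; real systems learn these from data rather than hard-coding them.

```python
# Illustrative sketch of multimodal fraud scoring: several independent identity
# signals are combined into one real-time risk score. Signal names, weights,
# and the threshold are assumptions made for this example, not a production model.
WEIGHTS = {
    "document_check": 0.35,   # ID document authenticity
    "liveness": 0.30,         # is a live human present, not a deepfake replay
    "behavioural": 0.20,      # typing/device patterns vs. the user's history
    "network": 0.15,          # cross-customer intelligence (shared fraud signals)
}

def risk_score(signals: dict) -> float:
    """Each signal is a risk in [0, 1]; higher means more suspicious.
    Missing signals default to a neutral 0.5 rather than being ignored."""
    return sum(WEIGHTS[name] * signals.get(name, 0.5) for name in WEIGHTS)

def decide(signals: dict, threshold: float = 0.6) -> str:
    """Block in real time when the combined score crosses the threshold."""
    return "block" if risk_score(signals) >= threshold else "allow"

# A synthetic identity often passes the document check in isolation but
# fails liveness and looks anomalous in cross-customer data:
synthetic = {"document_check": 0.1, "liveness": 0.95, "behavioural": 0.8, "network": 0.9}
print(decide(synthetic))  # "block": the combined signals catch what one check misses
```

The design point is the one the experts make: no single check is decisive, but an identity that looks clean on documents alone still fails when live, behavioural, and cross-customer signals are weighed together.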

Put all these viewpoints together and a clear picture emerges.

Identity is no longer a checkbox. It is the infrastructure of the future.

It determines how AI agents act, how transactions move, how fraud is blocked, how consumers trust a brand, and how cross-border digital economies function.

The “My AI did it” excuse may soon enter the lexicon — half joke, half genuine confusion. But as regulators, technologists and companies across APAC keep repeating, it won’t hold up.

In this new age of autonomous systems, one thing remains constant:
somebody must be accountable.

And the entire system — from regulators to financial institutions to social platforms — is racing to ensure they know exactly who.

Contributing writer at The Independent News