How making identity a priority can help bring clarity to AI chaos

Many IT leaders lack confidence in their organisation's ability to recognise deepfakes.

Cybercrime now costs the APAC region US$1 trillion annually, and with AI-driven threats expected to climb even higher, identity has never been more critical. AI-powered deception is testing the limits of traditional fraud controls, and a lack of risk-based trust assessment when implementing AI could open the door to data leaks, algorithmic bias and reputational damage.

In an interview with iTNews Asia, Johan Fatenberg, Director at Ping Identity, says identity is now a strategic control point, and that organisations should seize the opportunity to fight fire with fire by using AI to combat AI-based attacks. Companies can invest in AI-based detection systems to spot deepfakes, abnormal user behaviour, and sophisticated fraud attempts.

Fatenberg also explains why enterprises must rethink their approach to identity security in the age of AI threats, and shares his recommendations on critical identity strategies that enterprises can adopt.

iTNews Asia: What are the observations from how companies in APAC are falling prey to identity threats and attacks?

Fatenberg: For many APAC organisations, the speed of digital transformation is outpacing security controls. These organisations are grappling with fragmented systems, siloed data, and inconsistent protocols across vendors and channels. There's also limited interoperability between fraud, compliance, and customer systems. This lack of control is a red flag, especially as GenAI adoption accelerates.

IDC, for example, forecasts that up to 20 percent of enterprises will move to production with GenAI without a comprehensive risk-based trust assessment. This could open the door to data leaks, algorithmic bias, reputational damage, and hefty penalties.

Proposals to migrate from legacy systems to unified enterprise fraud management (EFM) platforms are generally complicated by architectural complexity. The key here is to leverage tools that facilitate the migration of identity to the cloud. This is a crucial step in digital transformation, as organisations are essentially equipping themselves to securely launch new apps faster and increase efficiency while reducing infrastructure and administrative costs.

iTNews Asia: Can you share real-life recent examples of deepfakes, abnormal user behaviour, and fraud attempts targeting enterprises?

Fatenberg: We're seeing first-hand how AI-powered deception is testing the limits of traditional fraud controls. Last year, a finance officer in Hong Kong was tricked into paying out US$25 million to fraudsters who used deepfake technology to pose as the company's CFO.

Meanwhile, just about a month ago in Singapore, a company's finance director nearly lost more than US$499,000 in a business impersonation scam involving deepfake technology. Thankfully, the money was recovered by the Singapore Police Force in collaboration with their Hong Kong counterparts.

This tracks with our own findings that almost half (48 percent) of IT leaders lack confidence in their organisation's ability to recognise deepfakes. In fact, our survey of IT professionals also found that more than half believe AI will increase identity fraud risks, while two out of five organisations expect cybercriminals to significantly escalate AI-driven attacks over the next 12 months.

However, we can fight fire with fire. We can leverage AI to analyse text for anomalies, flag suspicious transactions, and automate case summarisation.

AI is both the threat and the defence. The question is whether your organisation has made that pivot to respond to this brave new world or whether you're still relying on controls designed for a long-gone threat landscape.

- Johan Fatenberg, Director at Ping Identity

iTNews Asia: What critical identity strategies can enterprises adopt?

Fatenberg: Identity is now a strategic control point, not just a security feature, and organisations must harness Identity Threat Detection and Response (ITDR) and Decentralised Identity (DCI). ITDR reinforces zero trust by continuously monitoring for anomalous identity activity across hybrid environments. DCI complements this by reducing reliance on centralised data stores; verifiable credentials are stored securely on user devices, limiting exposure in the event of a breach.
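To make the ITDR idea concrete, here is a minimal, hypothetical sketch: each identity event is compared against a per-user baseline, and deviations such as an unfamiliar country or activity outside usual hours are flagged for review. The baseline structure and signal names are illustrative assumptions, not any real ITDR product's schema.

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str
    hour_utc: int  # 0-23

def flag_anomalies(baseline: dict, event: LoginEvent) -> list[str]:
    """Compare a login event against a per-user baseline and return reasons to flag it."""
    profile = baseline.get(event.user)
    if profile is None:
        return ["unknown identity"]
    reasons = []
    if event.country not in profile["countries"]:
        reasons.append("login from unfamiliar country")
    lo, hi = profile["active_hours"]
    if not (lo <= event.hour_utc <= hi):
        reasons.append("login outside usual hours")
    return reasons

# Illustrative baseline: alice normally logs in from SG/HK between 01:00-12:00 UTC.
baseline = {"alice": {"countries": {"SG", "HK"}, "active_hours": (1, 12)}}
```

In a real deployment this logic would run continuously across hybrid environments and feed a response pipeline, rather than a single function call.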

Streamlined access through single sign-on (SSO) improves user experience while reducing password-related risks. Some enterprises are going even further by eliminating passwords altogether, instead using cryptographic credentials to further enhance security.

Together, these strategies allow enterprises to treat identity as a living system: adaptable to future risk landscapes, especially as AI reshapes access patterns.

iTNews Asia: If not managed well, can AI create even more risks in real-world situations? How can AI be correctly used to fight AI?

Fatenberg: The key here is to pivot to a "trust nothing, verify everything" mindset. Rigorously confirming identities significantly reduces the risk of impersonation, credential abuse, and unauthorised access, even when these attacks are powered by AI.

Behavioural biometrics, meanwhile, offers an additional layer of fraud detection. By analysing patterns in how users type, move their mouse, or navigate a site, organisations can identify subtle anomalies that may indicate threats in real time, without disrupting the user experience.
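A toy illustration of this kind of pattern analysis: the sketch below scores a session's inter-keystroke timings against a user's baseline, where a large deviation suggests the session does not match the genuine user (for example, machine-speed input from a script). Real behavioural biometrics engines model many more signals; the data and thresholds here are purely illustrative.

```python
import statistics

def keystroke_anomaly_score(baseline_intervals, session_intervals):
    """Z-score of a session's mean inter-keystroke interval (ms)
    against the user's historical baseline distribution."""
    mu = statistics.mean(baseline_intervals)
    sigma = statistics.stdev(baseline_intervals) or 1e-9  # guard against zero variance
    return abs(statistics.mean(session_intervals) - mu) / sigma

# Illustrative data: typical human typing vs. scripted, machine-speed input.
baseline = [110, 120, 115, 118, 112, 121, 117]  # ms between keystrokes
genuine_session = [116, 119, 113]
scripted_session = [20, 22, 19]
```

A genuine session scores near zero, while the scripted one lands far outside the baseline distribution and would be escalated for further checks.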

Risk-adaptive workflows take this a step further. Powered by AI, they dynamically adjust security measures based on the behaviour and risk profile of each session. This allows genuine users to move through systems with ease while subjecting potentially fraudulent activity to increased scrutiny.
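One way to sketch a risk-adaptive workflow is as a weighted combination of session risk signals mapped to an authentication decision: allow low-risk sessions through, step up to MFA for moderate risk, and block high risk. The signal names, weights, and thresholds below are assumptions for illustration, not a vendor's scoring model.

```python
def session_risk(signals: dict) -> float:
    """Combine weighted risk signals into a 0-1 score (illustrative weights)."""
    weights = {"new_device": 0.3, "unusual_geo": 0.3, "behaviour_anomaly": 0.4}
    return sum(w for name, w in weights.items() if signals.get(name))

def required_action(score: float) -> str:
    """Map a risk score to a security response for the session."""
    if score < 0.3:
        return "allow"          # genuine users move through with ease
    if score < 0.7:
        return "step_up_mfa"    # increased scrutiny for suspicious sessions
    return "deny"
```

In practice the score would be produced by an AI model over many more signals, but the decision layer, dynamically adjusting friction per session, follows the same shape.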

iTNews Asia: What about the dangers from agentic AI? Do current processes need significant enhancement to handle AI identities?

Fatenberg: The dynamic nature of AI agents requires more sophisticated access controls that can adapt to changing tasks and responsibilities. Unfortunately, traditional IAM frameworks are designed to only authenticate and authorise human users, not autonomous AI-driven entities.

If an AI agent has excessive permissions, a security breach could allow attackers to exploit its privileged access and move across enterprise networks. Enterprises should therefore establish identification and lifecycle management for all managed AI agents interacting with their systems. This includes provisioning agents with unique client identifiers and preventing the agents from accessing corporate resources when no longer needed.
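A hypothetical sketch of that lifecycle management: each agent is provisioned with a unique client identifier and a time-bounded grant, and is deprovisioned when no longer needed, so stale agents cannot keep accessing corporate resources. Class and method names here are invented for illustration.

```python
import uuid
from datetime import datetime, timedelta, timezone

class AgentRegistry:
    """Illustrative registry for AI agent identities with expiring access."""

    def __init__(self):
        self._agents = {}

    def provision(self, name: str, ttl_hours: int = 24) -> str:
        """Register an agent under a unique client identifier with a bounded lifetime."""
        client_id = f"agent-{uuid.uuid4()}"
        self._agents[client_id] = {
            "name": name,
            "expires": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
        }
        return client_id

    def is_authorised(self, client_id: str) -> bool:
        """An agent is authorised only while registered and unexpired."""
        agent = self._agents.get(client_id)
        return agent is not None and datetime.now(timezone.utc) < agent["expires"]

    def deprovision(self, client_id: str) -> None:
        """Revoke an agent's access when it is no longer needed."""
        self._agents.pop(client_id, None)
```

The expiry forces periodic re-certification, which pairs naturally with the sponsor/custodian reviews described below.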

Understanding the different classes of AI agents based on their interaction methods will also help in assigning the appropriate permissions. Furthermore, assigning sponsors or custodians responsible for reviewing and recertifying agent access helps ensure accountability.

iTNews Asia: Can you share some best practices in building an AI-ready and more resilient authentication infrastructure?

Fatenberg: To establish trust and security in AI-driven environments, organisations should adopt four high-level principles for AI-aware identity and access management.

  • First, they must establish a framework for AI identity governance to define, track, and manage AI agents across systems.
  • Second, applying context-based access controls ensures that AI entities cannot exceed their assigned permissions.
  • Third, authentication and verification methods must be strengthened to confirm the legitimacy and behaviour of AI agents interacting with critical systems.
  • Finally, visibility and oversight must be enhanced through AI-powered monitoring, enabling real-time detection of anomalies and enforcing accountability across digital operations.

Building AI-ready identity security requires more than just extending traditional IAM practices: it demands a purpose-built approach to managing machine identities. Overseeing AI agent identities is vital for maintaining trust, accountability, and security.

AI agents should receive permissions according to the principle of least privilege, which means they should only access the minimum set of actions and resources necessary for executing their assigned tasks. When AI agents operate for human users, it's essential to utilise delegation processes rather than permitting the agent to mimic a human user.
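Delegation under least privilege can be sketched as a scope intersection: a delegated agent receives only the permissions it requests that the human user actually holds and that policy allows for that agent class, and the resulting grant records who delegated it rather than impersonating the user. The function below is an illustrative assumption, not a standard API.

```python
def delegate(user_scopes: set, requested: set, agent_allowed: set) -> dict:
    """Issue a delegated grant: the intersection of what the agent requests,
    what the delegating user holds, and what policy permits for this agent class.
    The grant keeps the agent's own identity distinct from the user's."""
    return {
        "scopes": requested & user_scopes & agent_allowed,
        "acting_for": "user",   # delegation, not impersonation
    }

# Illustrative: the user holds admin rights, but the agent class never receives them.
grant = delegate(
    user_scopes={"read", "write", "admin"},
    requested={"read", "admin"},
    agent_allowed={"read", "write"},
)
```

This mirrors the delegation semantics of token-exchange patterns, where the issued credential carries both the acting party and the party it acts on behalf of.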

iTNews Asia: What strategic pivots should enterprises make in their digital identity approach to thrive in an AI-chaotic future?

Fatenberg: As AI takes on an increasingly prominent role in enterprise workflows, enterprises must rethink their approach to identity security. Tech-savvy consumers will begin holding enterprises accountable by insisting that steps be taken to protect their personal data in an era of rapidly evolving AI fraud.

Enterprises that verify identity rather than assume it, adopt user-oriented technologies, and align security with consumer expectations for transparency will lead the way. By embracing these shifts, they can better manage emerging risks, establish a basis of trust and, by extension, secure enduring success and competitiveness.

© iTnews Asia