iTnews Asia

AI transforms cyberattacks, but human trust remains the weakest link

Modern scams are increasingly targeting human judgement rather than technical vulnerabilities.

By Abbinaya Kuzhanthaivel on Mar 12, 2026 8:19AM

Artificial intelligence is rapidly transforming how cyberattacks are carried out. From automated reconnaissance to highly personalised phishing messages, attackers are using AI to scale operations and make scams more convincing than ever before.

However, according to Righard Zwienenberg, Senior Research Fellow at cybersecurity firm ESET, speaking with iTnews Asia, the underlying goal of cybercrime has not changed. “The real target remains human trust, and that continues to be the most exploitable vulnerability in modern organisations,” he said.

AI has not changed the intent behind cyberattacks, but it has dramatically improved efficiency across the attack lifecycle. At the reconnaissance stage, attackers can automatically analyse social media profiles, leaked databases, and public records to construct detailed victim profiles in minutes rather than days.

Generative AI enables attackers to craft messages that mirror local language, tone, and writing style, making them far more convincing, he explained.

Execution methods are also evolving. Instead of relying on obvious malicious links, attackers are increasingly using browser-based manipulation, malicious scripts, and AI-assisted business email compromise that guide victims through what appears to be a normal workflow.

Further, at the extraction stage, Zwienenberg said AI enables automated credential harvesting and adaptive attack strategies. The biggest transformation, he noted, is in the lure and execution stages, where attackers increasingly influence victims’ judgement through familiarity, urgency, and trust.

The dangerous myth that we can easily discern a scam

One of the most persistent misconceptions among organisations is the belief that scams are still easy to recognise. Many companies assume malicious messages will contain obvious warning signs such as poor grammar, suspicious links, or unknown senders. In reality, modern scams are designed to blend seamlessly into everyday business communication.

Instead of exploiting technical vulnerabilities, attackers increasingly exploit human judgement under pressure. Messages often arrive through trusted channels and mimic legitimate workflows, making them difficult to detect using traditional filtering tools.

“The real danger today is not a technical failure, but a human decision made under pressure or false trust,” he added.

The changing nature of cyberattacks

Despite years of awareness campaigns and security investments, phishing and scam-driven attacks continue to dominate global threat landscapes. Social engineering exploits psychological triggers such as trust, urgency, and routine behaviour, factors that technology alone cannot easily eliminate.

At the same time, phishing remains attractive to cybercriminals because it is cheap, scalable, and relatively low-risk compared with exploiting technical vulnerabilities.

The reason is simple: these attacks target people rather than systems, Zwienenberg said. As long as people remain the final decision-makers in most business processes, he believes phishing will remain one of the most effective attack techniques.

Another emerging concern is the rapid accessibility of voice cloning technology. Cybercriminals can now create convincing voice impersonations using short audio samples and inexpensive tools. This lowers the barrier to launching impersonation attacks that mimic executives or colleagues.

Defending against such attacks requires organisations to rethink identity verification. Recognising a familiar voice will no longer be sufficient proof of identity. Instead, Zwienenberg emphasised the need for process-based trust, including independent verification channels, structured approval workflows, and rehearsed response playbooks for urgent requests.

Looking ahead, he anticipates the next escalation in AI-enabled scams may involve multi-channel impersonation: coordinated attacks combining email, voice calls, and messaging that reinforce the same deception and pressure victims to act quickly.

Modern scams are also shifting away from traditional malicious files and attachments. In many cases, attackers manipulate victims directly within their web browser, guiding them through seemingly legitimate actions such as copying commands, verifying accounts, or resolving technical errors.

A growing example is the ‘ClickFix’ scam technique, where fake error messages or CAPTCHA prompts persuade users to perform actions that install malware or reveal credentials.

Because these attacks rely on human behaviour rather than identifiable malicious files, they can bypass traditional email security and antivirus systems designed to detect harmful attachments or links, Zwienenberg explained.

Another emerging threat he pointed out is a polluted AI ecosystem. “This occurs when AI systems are surrounded by unreliable or manipulated inputs, including misinformation, synthetic content, poisoned prompts, and misleading guidance,” he added.

The danger is that organisations often treat AI outputs as trustworthy shortcuts. If the underlying data or context is compromised, AI systems may generate persuasive but incorrect advice, potentially influencing real decisions related to security operations or financial approvals.

How can we build resilience in the AI era?

Traditional cybersecurity frameworks were built around detecting technical threats. But modern scams are increasingly trust events rather than technical incidents.

Zwienenberg argues that organisations must adopt three new priorities:

● Decision integrity: Sensitive actions such as payments or supplier changes must require independent verification steps.

● Behavioural telemetry: Security teams should monitor unusual requests, timing, and workflow anomalies rather than relying only on malware indicators.

● Practised human response: Organisations need regular drills focused on social engineering scenarios, not just ransomware incidents.
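As an illustrative sketch only, the "decision integrity" priority above could look something like the following in code: a sensitive action is released only after confirmation through a channel independent of the one that requested it. The field names, channels, and threshold here are hypothetical assumptions for illustration, not part of ESET's guidance.

```python
# Hypothetical sketch of a "decision integrity" check: a payment request is
# only releasable once it has been confirmed through a channel independent
# of the one it arrived on. All names and the threshold are illustrative.

from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # amounts at or above this need out-of-band confirmation


@dataclass
class PaymentRequest:
    amount: float
    requested_via: str                                # e.g. "email"
    confirmations: set = field(default_factory=set)   # channels that confirmed

    def confirm(self, channel: str) -> None:
        self.confirmations.add(channel)

    def is_releasable(self) -> bool:
        if self.amount < APPROVAL_THRESHOLD:
            return True
        # Require at least one confirmation on a channel other than the one
        # the request arrived on, e.g. a call back on a known phone number.
        return any(c != self.requested_via for c in self.confirmations)


req = PaymentRequest(amount=50_000, requested_via="email")
req.confirm("email")            # a same-channel "confirmation" is not enough
assert not req.is_releasable()
req.confirm("phone_callback")   # independent channel satisfies the check
assert req.is_releasable()
```

The point of the sketch is the structural rule, not the specific fields: a reply on the requesting channel can never satisfy the verification step, which is exactly what voice-clone and business-email-compromise attacks exploit.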

Ultimately, improving resilience means accepting that some socially engineered attacks will succeed and designing systems to contain and recover from them quickly.

The real metric of cybersecurity success

Zwienenberg believes organisations often place too much confidence in technical controls and early detection. Instead, maturity should be measured by how quickly companies can contain and recover from incidents, along with signals such as staff reporting suspicious requests early and verification steps becoming routine.

The most dangerous defensive assumptions, he warns, will soon look outdated: trusting a voice because it sounds familiar, assuming messages without malicious links are safe, or relying on annual security training to keep pace with evolving social engineering.

In the AI era of cybercrime, the critical challenge is no longer simply verifying identity. It is ensuring authenticity before trust turns into vulnerability.

© iTnews Asia

Related Articles

  • How severe will ransomware attacks become in 2026?
  • Identity is now the new cybersecurity battlefield
  • Why APAC organisations must rethink their cloud and AI security
  • Why is fragmentation the next big cybersecurity risk?

