A classic Tom & Jerry scene unravels: Jerry emerges from his hideout, only to immediately get slammed in the face as Tom flings his door open. Jerry retaliates by slamming the door back in Tom’s face, then runs away quickly to escape from Tom’s fury. The great chase continues with Jerry constantly outsmarting Tom, while Tom trips, falls, and injures himself as he seeks to get his revenge.
The Tom and Jerry chase feels much like our relationship with cybercriminals in today’s online world. Cybercriminals play Jerry: quick-witted, sophisticated, and always one step ahead with their social engineering tactics. The rest of us play Tom, stumbling over ourselves as we try to defend against increasingly agile threats.
As if things were not complicated enough, AI then enters this proverbial cat and mouse game. AI has made our lives easier and more convenient, but it has also turbocharged scammers’ ability to increase both the quantity and quality of their scams: essentially, anyone with access to the Internet can potentially fall victim to such crimes. AI has no loyalty to Tom or Jerry, but Jerry may be the one getting more "joy" out of the technology.
And the data validates this. According to a LexisNexis study of how AI is increasingly used to commit cybercrime in digital transactions globally, APAC has been identified as the region with the highest volume of fraud attacks, with human-initiated (social engineering) attacks up 61 percent year on year and automated bot attacks up 6 percent.
Given how widespread and advanced these threats have become, especially with the rise of AI, it is crucial to examine how this technology both helps and hinders our cybersecurity efforts, and what the law is doing to catch up.
AI: The cybersecurity frenemy
Having an AI-powered security system is like having a super-intelligent bloodhound that never sleeps. It can spot anomalies in network traffic faster than your best analyst, automate routine threat detection, and even help predict future vulnerabilities based on patterns of attack.
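As a toy illustration of the underlying idea, the sketch below flags traffic volumes that deviate sharply from a learned baseline. The figures and threshold are assumptions, and real AI-driven tools model far richer signals than request counts.

```python
from statistics import mean, stdev

# Toy illustration: flag minutes whose request volume deviates sharply
# from a historical baseline. Real AI-driven tools learn far richer
# features (ports, payloads, user behaviour), but the principle is similar.
baseline = [1020, 980, 1105, 990, 1040, 1010, 995, 1060]  # requests/min (assumed)
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(requests_per_min: int, threshold: float = 3.0) -> bool:
    """Return True if traffic is more than `threshold` std devs from normal."""
    return abs(requests_per_min - mu) / sigma > threshold

print(is_anomalous(1015))   # False: within the normal range
print(is_anomalous(4800))   # True: possible exfiltration or DDoS spike
```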
Just over the fence, cybercriminals have unrestrained access to the same underlying technology. With generative AI, they have created incredibly convincing phishing emails, cloned executive voices, and generated deepfake videos that can trick even the most skeptical eye.

If a CEO has ever spoken in public, or even on a webinar, that might be enough audio and video data to generate a fraudulent endorsement of an investment scheme that clients, partners, and even staff could fall for.
- Yuankai Lin, Partner, RPC Premier Law
Beyond creating fraudulent endorsements, scammers have leveraged generative AI to exploit urgency, one of the most reliable levers in human psychology. Using cloned executive voices and deepfake videos, they stage calls that appear to come from the C-suite, demanding immediate fund transfers.
Even when such requests are inconsistent with the company’s past practices or protocols, scammers exploit the fear of employees who believe their jobs are at stake, pressuring them into action.
That is why businesses need more than firewalls and antivirus software. Legal teams and CISOs need to collaborate to create multi-layered defences. This could entail setting clear internal policies and training: AI usage policies within the organisation, watermarking official communications, and regular phishing simulations for staff.
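One way to implement such watermarking, assuming a shared internal signing key and verification built into staff tooling, is to attach a message authentication code to each official message. The sketch below is illustrative rather than a recommendation of any particular scheme; the key and helper names are hypothetical.

```python
import hashlib
import hmac

# Illustrative sketch: tag official messages with an HMAC so internal
# tooling can verify they really came from the communications team.
# SECRET_KEY is a placeholder; in practice it lives in a secrets manager.
SECRET_KEY = b"replace-with-managed-secret"

def sign_message(body: str) -> str:
    """Return a hex tag binding the message to the organisation's key."""
    return hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()

def verify_message(body: str, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign_message(body), tag)

announcement = "All-hands moved to Friday 3pm."
tag = sign_message(announcement)
assert verify_message(announcement, tag)          # authentic
assert not verify_message("Wire $50k now.", tag)  # forged or altered
```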
From a legal perspective, incident response plans will have to be updated to deal with the possible threats emanating from deepfakes and synthetic media. Corporate communication protocols, employee codes of conduct and vendor contracts should likewise be reviewed.
Standard Operating Procedures (SOPs) should include guidelines for making sensitive requests such as fund transfers, and training should reiterate that CEOs and other members of the C-suite will never reach out directly to make such requests.
For high-stakes transactions, it is best to confirm face to face, even if only over a live video call. But internal vigilance alone is not enough. Even if your defences are robust, vulnerabilities often enter through less visible channels, especially via third-party partners and vendors.
You have battened down the hatches, but what about the back door from third parties?
Even with the house in order, the dozens of globally interdependent suppliers and partners a business relies on may unknowingly put a target on its back. Supply chains have become a favourite entry point for attackers.
Third-party cybersecurity risk is a growing problem, especially in highly interconnected industries like finance, manufacturing, and logistics. Many breaches seen today do not start at the target organisation, but with a third-party vendor who did not patch a system or failed to detect a malware infection.
Truthfully, falling behind on the latest security updates has quietly become the norm for many companies, for the sake of cost or convenience. In Singapore, for example, the country’s Personal Data Protection Act does not require organisations to deploy state-of-the-art security systems; "reasonable measures" suffice.
Depending on the volume and type of personal data involved, reasonable measures could simply entail implementing multi-factor authentication and the latest security updates. Many companies appear to still be falling short of this standard.
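As an illustration of one such measure, here is a minimal sketch of time-based one-time password (TOTP) verification using the open-source pyotp library. Enrolment, secret storage, and rate limiting are omitted and would need proper handling in practice.

```python
import pyotp  # pip install pyotp

# Minimal TOTP flow: the secret is shared once with the user's
# authenticator app (e.g. via QR code); thereafter, logins must
# present a 6-digit code that rotates every 30 seconds.
secret = pyotp.random_base32()   # store server-side, per user
totp = pyotp.TOTP(secret)

# "alice@example.com" and "ExampleCorp" are placeholder identifiers.
print("Provisioning URI:", totp.provisioning_uri(
    name="alice@example.com", issuer_name="ExampleCorp"))

code = totp.now()                # what the user's app would display
# valid_window=1 tolerates one 30-second step of clock drift
print("Accepted:", totp.verify(code, valid_window=1))
```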
Start getting to know each digital neighbour: maintain a live, accurate inventory of third-party vendors with access to systems and data, and implement a tiered risk assessment model, as sketched below. Contractually, clear cybersecurity clauses covering baseline security requirements, breach notification timeframes, and audit rights will go a long way.
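A tiered model can start simply. The sketch below ranks vendors by the blast radius of a compromise; the tier criteria, obligations, and vendor names are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

# Illustrative tiering: rank vendors by the blast radius of a compromise.
# The criteria and obligations here are assumptions; tailor them to your
# own risk appetite and contractual baseline.
@dataclass
class Vendor:
    name: str
    has_system_access: bool      # can log into our systems
    handles_personal_data: bool  # processes customer/employee data

def risk_tier(v: Vendor) -> str:
    if v.has_system_access and v.handles_personal_data:
        return "Tier 1: annual audit, breach notice within 24h"
    if v.has_system_access or v.handles_personal_data:
        return "Tier 2: annual questionnaire, breach notice within 72h"
    return "Tier 3: standard contract clauses"

vendors = [
    Vendor("PayrollCo", True, True),    # hypothetical vendors
    Vendor("CateringCo", False, False),
]
for v in vendors:
    print(f"{v.name}: {risk_tier(v)}")
```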
Across the region, the growing complexity in cyber risk management, both internal and external, has prompted regulators in Singapore and Hong Kong to respond with clearer guidelines, placing more responsibility on businesses to prepare and react effectively.
Singapore: A regulatory reality check
In Singapore, regulators such as the Personal Data Protection Commission (PDPC) have always had the expectation that companies not only prevent breaches but know exactly what to do when one happens.
Even a cursory look through the PDPC's Guide on Managing and Notifying Data Breaches makes one thing clear: incident response needs to be fast, coordinated, and well-documented.
So, now is the time to establish cross-functional coordination between legal, IT, HR, and communications, so that the incident response plan is rehearsed and ready, because regulators are assessing governance. That means breach logs, decision trails, and evidence that cybersecurity is being taken seriously at the board level.
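As a rough illustration of what a decision trail can look like, the sketch below appends timestamped, attributed decisions to a log file. The fields, file name, and scenario are hypothetical, not a regulatory template.

```python
import json
import time

# Illustrative append-only decision trail: each entry records who decided
# what, when, and why, building an auditable incident narrative.
LOG_PATH = "incident_decisions.jsonl"  # hypothetical log location

def record_decision(actor: str, decision: str, rationale: str) -> None:
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": actor,
        "decision": decision,
        "rationale": rationale,
    }
    with open(LOG_PATH, "a") as f:  # append-only by convention
        f.write(json.dumps(entry) + "\n")

# Hypothetical entries from a rehearsal scenario:
record_decision("CISO", "isolate affected server",
                "suspicious outbound traffic to unknown host")
record_decision("Legal", "assess PDPC notification duty",
                "breach may involve personal data at significant scale")
```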
Ultimately, no matter how comprehensive the defences or detailed the regulatory frameworks, the cat-and-mouse game with cybercriminals is far from over — and AI may well be playing both sides.
Is AI a friend to all or a friend to none?
In today’s digital landscape, the cat and mouse game between defenders and cybercriminals has become relentless and increasingly high-stakes. While AI has become a powerful ally in threat detection and prevention, it has also supercharged the tools available to scammers, making a breach feel less like a possibility and more like an inevitability. The question is no longer if, but when.
The priority should always be on prevention, but equal focus needs to be given to preparation and resilience. That means building strong incident response plans, embedding cybersecurity into legal and operational frameworks, and cultivating a security-conscious culture that is felt across every level of the organisation.
Regulation will always lag innovation. But businesses do not have to. By acting decisively with cross-functional collaboration, transparent governance, and smart use of technology, we can outpace the next threat, even if we will always be one step behind overall.
In the end, the goal is not to be faster than Jerry. It’s to be smarter, better prepared, and harder to fool.
Yuankai Lin is Partner at RPC Premier Law, an international law firm