Organisations are leveraging AI to enhance security measures, but they must navigate significant risks associated with data protection and evolving threats.
Terry Ray, Imperva’s senior vice president for Data Security GTM and Field Chief Technology Officer, told iTNews Asia that balancing AI innovation with vigilance is crucial in the ongoing battle between cybersecurity experts and hackers.
The main threats include AI-powered malware, ransomware, phishing and social engineering attacks, data theft, model poisoning and automated attacks.
AI can be used to find and exploit vulnerabilities in security systems, including web application firewalls and API gateways. Attackers can leverage AI to identify and bypass security controls more effectively.
“These AI-enhanced threats can be more sophisticated, adaptive, and difficult to detect compared to traditional cyber attacks. Staying ahead of these evolving threats requires a proactive and comprehensive security strategy,” Ray said.
Set robust control measures from the start
Many organisations are still in the early stages of effectively incorporating AI into their security frameworks.
Ray said it's crucial for these firms to establish robust controls and security measures even before deploying AI.
“Without precautions, AI systems can become vulnerable to exploitation,” he added.
For example, recent breaches have shown that attackers can bypass safeguards, as seen with incidents involving ChatGPT, where hackers manipulated the system to create malware.
He added that many organisations implement security measures like web application firewalls and API gateways to protect the edge of their web applications, but they have done a "pretty poor job" of actually protecting their data.

“Organisations have focused on protecting web applications but neglected data security, leaving them unprepared for AI-powered data protection needs.”
- Terry Ray, senior vice president for Data Security GTM, Field Chief Technology Officer, Imperva
Make data security a critical focus
AI can boost security for applications and APIs, but the real challenge lies in data security.
Ray pointed out that many organisations struggle with data classification and lack a clear map of where sensitive data is stored.
He emphasised that effective data security involves scanning the entire environment—production, development, and testing—to find all locations of private data before integrating AI systems.
He also advised against assuming that data is only stored in labelled locations, like a "credit card server." He stressed that private data can be in many unexpected places within an organisation.
“The ultimate goal of the data classification process should be to create a comprehensive and reliable map of an organisation's entire data landscape,” he added.
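As a rough illustration of the kind of scanning Ray describes, the sketch below samples text from a data store and flags sensitive data types by pattern matching. The store name, patterns, and `classify` helper are all hypothetical, and a real classification tool would cover far more data types and scan databases and file shares directly rather than plain strings.

```python
import re

# Hypothetical patterns; a production classifier would cover many more types.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like digit runs."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify(store_name: str, sample_text: str) -> dict:
    """Report which sensitive data types appear in a sample from a store."""
    found = {}
    for label, pattern in PATTERNS.items():
        matches = pattern.findall(sample_text)
        if label == "credit_card":
            matches = [m for m in matches if luhn_valid(m)]
        if matches:
            found[label] = len(matches)
    return {"store": store_name, "sensitive": found}

# Sampling an unlabelled store can surface private data in unexpected places,
# e.g. card numbers sitting in an HR archive rather than a "credit card server".
result = classify("hr_archive",
                  "Contact: jane@example.com, card 4111 1111 1111 1111")
```

Running this over every environment (production, development, testing) and aggregating the results would yield the kind of data map Ray describes.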
AI can simplify cybersecurity
Ray highlighted how AI can streamline security management by automating rule creation.
Traditionally, security professionals needed specialised training to configure rules for various systems; with AI, users can simply describe their needs in plain English and the necessary rules are generated automatically.
“This allows for a broader set of users to interact with and manage the security systems, as they can use natural language to communicate their requirements,” he added.
AI-generated rules can also be easily corrected or modified if the user's intent was not properly captured initially.
This flexibility allows organisations to quickly adapt their security controls as their needs change, without requiring deep technical knowledge.
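To make the idea concrete, here is a minimal sketch of what an editable, AI-generated rule might look like. The `SecurityRule` schema and `generate_rule` stub are assumptions for illustration only; a real product would use its own rule format and a language model in place of the stub.

```python
from dataclasses import dataclass, field

# Hypothetical rule schema; real WAF products define their own formats.
@dataclass
class SecurityRule:
    description: str                              # the plain-English intent
    action: str                                   # e.g. "block" or "alert"
    paths: list = field(default_factory=list)     # URL paths the rule covers
    countries: list = field(default_factory=list) # optional geo restrictions

def generate_rule(intent: str) -> SecurityRule:
    """Stand-in for an AI translator: maps a plain-English request to a rule.

    A real system would use a language model here; this stub only
    illustrates the shape of the structured output."""
    rule = SecurityRule(description=intent, action="alert")
    if "block" in intent.lower():
        rule.action = "block"
    if "/admin" in intent:
        rule.paths.append("/admin")
    return rule

# If the generated rule misses the user's intent, it can be edited directly
# rather than rebuilt from scratch.
rule = generate_rule("Block all traffic to /admin")
rule.countries.append("XX")  # later refinement: also restrict by geography
```

Because the rule is a structured object rather than opaque configuration, correcting or extending it needs no deep technical knowledge, which is the flexibility Ray describes.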
Ray believes that the ongoing cat-and-mouse game between attackers and defenders will persist despite advancements in technology.
However, AI's automation and broader skill applications will enable cybersecurity professionals to achieve more with less specialised expertise.
As organisations expand AI use beyond customer-facing applications to various systems and processes, the need for strong data security and classification will become even more critical, he added.