As enterprises across Asia-Pacific scale their use of generative and agentic AI, security leaders are raising fresh concerns about a largely overlooked risk, the AI prompt and interaction layer. While AI is unlocking productivity and margin gains, it is also creating a new, fast-evolving attack surface that traditional security frameworks are not fully equipped to handle.
In conversation with iTNews Asia, Fabio Fratucello, Field Chief Technology Officer, World Wide at CrowdStrike shared insights on why enterprises must start treating the AI prompt layer as a frontline security concern, warning that attackers are already exploiting weaknesses in how AI systems interpret instructions.
According to Fratucello, the growing concern is prompt injection, where attackers manipulate AI systems by crafting malicious inputs that alter model behaviour or bypass safeguards.
“This technology is providing an avenue to customers from a business standpoint, but this also extends the attack surface. There are models, workloads, agents, prompts, and all of those require protection,” he explained.
The executive further likened prompt injection to phishing, one of cybersecurity’s longest-standing threats. “If you think of phishing and emails, that was the avenue where attackers were targeting the human. Now it’s happening between the human and the machine, or between machines,” he said.
Fratucello added that prompt injection could become AI’s equivalent of phishing because of its low barrier to execution and high scalability.
The new challenges from AI agents
Fratucello said enterprises also need to rethink how they view AI agents, which are increasingly acting as digital workers inside organisations. This creates a new governance challenge, particularly when agents are granted privileged access to enterprise systems and datasets.
“With high power… comes potentially high privilege and access to extremely rich datasets and information,” Fratucello said. He stressed that organisations need to apply strong guardrails and runtime protections to monitor how these agents behave.
One of the biggest challenges, Fratucello noted, is the lack of visibility into how AI systems behave once deployed.

You write a prompt, you receive an output, but you don’t have visibility of what is being thought and what is being executed.
- Fabio Fratucello, Field Chief Technology Officer, World Wide, CrowdStrike
This makes AI systems difficult to monitor without specialised controls. He stressed the importance of runtime monitoring, which enables organisations to observe agent activity such as commands, scripts, file access, network connections, and application behaviour.
“We need the ability to understand AI behaviour at the point of execution. It is essential for detecting misuse and enabling security teams to respond quickly,” he explained.
Beyond managed deployments, Fratucello highlighted the rise of shadow AI as a growing blind spot. “These are AI capabilities existing inside the organisation, but they’re not approved, not sanctioned, and they may pose a risk because they don’t have the right visibility and governance.”
This can include unmanaged AI applications, plugins, models, runtimes, and development tools introduced without formal review, he added.
While many organisations are looking for secure-by-design AI blueprints, Fratucello cautioned against waiting for ideal frameworks before acting. “If we’re waiting for the perfect solution, we will fall behind,” he said.
Given the pace of innovation, he argued that security must evolve in tandem with adoption. “Security needs to run in parallel with the slope of technology innovation,” he said. Instead of delaying AI adoption, organisations should prioritise visibility first, followed by prevention and response capabilities.
The need for agentic security operations
As adversaries increasingly operate at machine speed, Fratucello said traditional security operations must also evolve. He pointed to the emergence of agentic security operations centres, where AI-powered systems and human analysts work together to improve response times.
“The platform will provide agentic security capabilities that allow organisations to respond at the same speed,” he said. This also includes automating repetitive security tasks such as threat intelligence gathering, malware analysis, and investigation workflows.
A balancing act: speed vs risk
As organisations push ahead with AI adoption for competitive advantage, many are implicitly accepting high levels of risk. However, Fratucello cautioned against treating this as a trade-off. “It’s not a question of whether to adopt AI, everyone already has AI. The question is how to adopt it in a safe and considered manner,” the executive said.
For years, enterprise security focused on endpoints, identities and networks. Fratucello believes AI prompts and interaction layers now deserve equal scrutiny.
As AI systems become embedded into daily business operations, the instructions they receive and how they interpret them could become one of the defining cybersecurity challenges of the next decade.




