Across the Asia Pacific region, organisations are racing to integrate artificial intelligence (AI), no longer as an experiment to see what is possible, but as a core capability for unlocking productivity gains and competitive advantage.
The market is now moving from experimentation to scaled deployment. Yet in this rush, a troubling pattern is emerging: businesses are prioritising rapid implementation over foundational security, creating a widening gap between adoption speed and security readiness.
Gartner predicts that by the end of this year, about 40 percent of enterprise applications will be integrated with task-specific AI agents, up from less than 5 percent today, underscoring how quickly agentic AI is moving into the mainstream.
Gartner’s own surveys of IT leaders show that only a small minority are willing to consider or deploy fully autonomous agents today, largely due to concerns about governance, trust, and new security attack surfaces.
While AI presents many opportunities, we can only fully capitalise on them if we build our innovations on a foundation of trust. That means laying a secure data foundation at the very start, so that organisations can explore new frontiers with greater assurance and confidence.
Evolving risks: from "garbage in" to agentic threats
The technology industry's adage "garbage in, garbage out" has evolved from a data quality concern into a critical business risk. Feed an AI system poor-quality data, and it will produce hallucinations or flawed insights that can lead crucial decisions astray.
At the same time, the threat landscape has only grown more sophisticated. Worries about AI code poisoning and supply chain vulnerabilities have moved from the theoretical to the tangible. Research from Anthropic, conducted with the UK AI Security Institute and the Alan Turing Institute, found that as few as 250 malicious documents are enough to produce a "backdoor" vulnerability in a large language model.
All it takes to compromise an entire AI system is a small number of poisoned inputs, regardless of model size or sophistication.
Meanwhile, the average organisational risk profile is undergoing a fundamental shift. As AI becomes "agentic", acting autonomously to execute tasks rather than simply responding to prompts, threats have evolved from passive data leaks to active operational sabotage.
An AI agent does not just hold sensitive information; it can act on that information, making decisions, executing transactions, and interacting with other systems without human oversight.
The impact of such breaches is already measurable and significant; in Deloitte’s view, the real cost of shadow AI is not just the breach itself but the governance gap it exposes. Unauthorised AI tools introduce opaque decision flows and data access paths that make containment, forensics, and compliance responses significantly more expensive and time-consuming than well-governed incidents.

These agentic threats now represent board-level risks. A compromised model can lead to intellectual property (IP) theft, supply chain disruption, fraudulent transactions at machine speed, and catastrophic reputational damage when customers discover they have been interacting with a compromised system. The stakes have never been higher.
Core philosophies of security in the build phase
Cybersecurity cannot be an afterthought patched on after deployment. It must be "shifted left" — integrated directly into the data architecture during the design and build phase, instead of bolted on as an expensive retrofit later.
This requires building for resilience from the ground up. Secure-by-design principles must be baked into the data foundation to ensure a trusted platform. This is the only way to safely scale AI innovation without creating additional risk.
While regulators, including those involved in Singapore's Global AI Assurance, are establishing the necessary guardrails and frameworks, the ultimate responsibility for securing specific AI implementations and their data foundations rests squarely with the business itself. Regulatory compliance is the floor, not the ceiling; forward-thinking organisations must exceed these baselines to truly protect their competitive advantage.
Three strategic pillars for a secure data foundation
- #1: From cloud-first to sovereign-first
The operating mantra for CIOs is being rewritten from a cloud-first to a sovereign-first approach, aligned with business needs. By prioritising data sovereignty and collaborating with trusted local partners to place data in-country, organisations ensure that their most sensitive information remains subject to local laws and protections, creating a legal and physical "air gap" against jurisdictional overreach.
From a business perspective, this approach mitigates the risk of "data bleeding," where institutional knowledge is inadvertently used to train external, third-party models that could eventually benefit a competitor.
Adopting a sovereign-first approach, often through hybrid models where "crown-jewel" datasets remain on-premises or within high-security local zones, allows businesses to innovate with AI while maintaining absolute ownership of their competitive intelligence. It provides the legal certainty required to scale AI across borders without compromising the sanctity of the data that fuels it.
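As a minimal sketch of how such a placement rule could be expressed, assuming hypothetical sensitivity tiers and storage zones rather than any specific platform's configuration:

```python
# Illustrative sketch of a sovereign-first placement rule.
# The tiers and zone names are hypothetical, for illustration only.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CROWN_JEWEL = 3

# Hypothetical policy: the most sensitive datasets never leave
# in-country, on-premises (or high-security local zone) infrastructure.
PLACEMENT = {
    Sensitivity.PUBLIC: "regional-cloud",
    Sensitivity.INTERNAL: "in-country-cloud",
    Sensitivity.CROWN_JEWEL: "on-premises-local-zone",
}

def placement_for(dataset_name: str, sensitivity: Sensitivity) -> str:
    """Return the storage zone a dataset must be pinned to before any
    AI workload (training, retrieval, agent access) may use it."""
    zone = PLACEMENT[sensitivity]
    print(f"{dataset_name}: sensitivity={sensitivity.name} -> zone={zone}")
    return zone

placement_for("customer-contracts", Sensitivity.CROWN_JEWEL)  # stays in-country, on-premises
placement_for("marketing-assets", Sensitivity.PUBLIC)         # may use regional cloud
```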
- #2: Intelligent threat detection and integrity
In the AI era, the definition of a "breach" has changed. Modern security moves beyond traditional perimeter defence (which focuses on who is getting in) to continuous integrity verification, which focuses on what is happening to the data itself.
The business risk here is profound: if an AI model is fed "poisoned" or subtly altered data, the resulting insights can lead to catastrophic strategic errors, from flawed financial forecasting to biased automated customer interactions.
Intelligent threat detection must act as a real-time immune system, using machine learning to identify anomalies in data patterns that a human eye would miss. This creates a "virtuous cycle" of security: using AI to defend the very data that trains your AI.
By ensuring the absolute integrity of the data pipeline, leaders can trust that their AI-driven decisions are based on facts, not artifacts left by malicious actors.
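As an illustration only, and not a description of any particular product, the sketch below shows how a pipeline might screen an incoming batch of records against a trusted baseline before it is allowed to train or ground a model. It uses scikit-learn's IsolationForest; the features, thresholds, and quarantine step are hypothetical.

```python
# Illustrative sketch: screening an incoming data batch for anomalies
# before it enters an AI training or retrieval pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

def screen_batch(trusted_features: np.ndarray, incoming_features: np.ndarray,
                 contamination: float = 0.01) -> np.ndarray:
    """Return a boolean mask of incoming rows consistent with the trusted
    baseline; flagged rows should be quarantined for human review."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    detector.fit(trusted_features)                   # learn the "normal" data profile
    verdicts = detector.predict(incoming_features)   # +1 = inlier, -1 = anomaly
    return verdicts == 1

# Example with synthetic numeric features (e.g. embedding statistics,
# document length, source-reputation scores):
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(5000, 8))             # historical, vetted data
new_batch = np.vstack([rng.normal(0.0, 1.0, size=(95, 8)),
                       rng.normal(6.0, 0.5, size=(5, 8))])  # 5 suspicious rows
ok = screen_batch(baseline, new_batch)
print(f"{(~ok).sum()} of {len(new_batch)} records quarantined for review")
```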
- #3: Resilience through an identity-first approach
As AI becomes agentic, the most critical security perimeter is identity. In today’s enterprise, users are increasingly non-human: bots, APIs, and autonomous AI agents now outnumber employees. If these identities are over-privileged, a single compromised agent can become a gateway to the entire corporate ecosystem.
A strategic identity-first approach focuses on the lifecycle and behaviour of these machine identities. By applying the principle of least privilege to every AI agent and granting only the specific access required for a specific task, organisations can effectively limit the "blast radius" of any potential incident.
This approach around architectural resilience ensures that if one component of the system is compromised, the rest of the business remains insulated. In this model, identity becomes the fundamental control plane that allows for bold experimentation without the fear of systemic failure.
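As a minimal sketch of least privilege applied to machine identities, assuming a hypothetical in-house policy structure (the class, tool, and scope names below are illustrative, not any vendor's API), each agent identity carries an explicit allowlist of tools and data scopes, and every action is checked against it before execution:

```python
# Minimal sketch of least-privilege scoping for machine identities.
# Policy structure, tool names, and scopes are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    agent_id: str
    allowed_tools: frozenset[str]        # actions this identity may perform
    allowed_scopes: frozenset[str]       # data domains it may touch
    max_transaction_value: float = 0.0   # hard cap for financial actions

class PolicyViolation(Exception):
    pass

def authorise(policy: AgentPolicy, tool: str, scope: str, amount: float = 0.0) -> None:
    """Raise unless the requested action falls inside the agent's grant."""
    if tool not in policy.allowed_tools:
        raise PolicyViolation(f"{policy.agent_id}: tool '{tool}' not granted")
    if scope not in policy.allowed_scopes:
        raise PolicyViolation(f"{policy.agent_id}: scope '{scope}' not granted")
    if amount > policy.max_transaction_value:
        raise PolicyViolation(f"{policy.agent_id}: amount {amount} exceeds cap")

# An invoicing agent gets exactly what its task requires, nothing more.
invoice_agent = AgentPolicy(
    agent_id="agent-invoicing-01",
    allowed_tools=frozenset({"read_invoice", "create_payment_draft"}),
    allowed_scopes=frozenset({"finance/invoices"}),
    max_transaction_value=10_000.0,
)

authorise(invoice_agent, "read_invoice", "finance/invoices")  # permitted
try:
    authorise(invoice_agent, "delete_customer_record", "crm/customers")
except PolicyViolation as err:
    print(f"blocked: {err}")  # out-of-scope action is contained, limiting the blast radius
```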
Choosing solid ground
Business leaders today face a choice: pursue sustainable progress built on secure data foundations, or rush ahead with unsecured innovation that promises short-term gains, but courts long-term disaster.
Organisations must take the long view; vulnerabilities introduced today will require costly remediation tomorrow. Conversely, patience and architectural discipline in the short term will yield trust, resilience, and leadership in the long term. The businesses that lead the industries of tomorrow will be those whose AI systems are not just powerful but demonstrably trustworthy, because trust is becoming the ultimate competitive advantage.
Stephen McNulty is Senior Vice President, Asia Pacific, OpenText.




