AI is propelling innovation across sectors through generative content, intelligent automation, and real-time decision-making. But with this rapid evolution comes heightened exposure to emerging cyber threats.
In 2024, Akamai tracked 51 billion web attacks on conventional and AI-powered applications in Asia Pacific and Japan, a 73 percent surge year over year. The financial services and commerce sectors alone saw 27 billion and 18 billion attacks, respectively.
These two critical industries are central to the region’s digital economy. Financial services alone account for over 14 percent of Singapore’s GDP and play a foundational role in APAC’s digital infrastructure. E-commerce, meanwhile, drives close to half of global sales transactions, worth US$1.8 trillion annually. Their scale and reliance on sprawling hybrid infrastructure, APIs, and real-time interactions make them attractive, high-value targets.
As enterprises increasingly adopt LLM, agentic AI, and GenAI across their operations, securing these advanced systems is no longer just a technical requirement but a fundamental economic necessity.
Why AI-driven applications are under siege
Unlike traditional systems, AI models process dynamic, unstructured data and operate in probabilistic, non-deterministic ways. This makes them vulnerable to a whole new class of threats: prompt injection, model extraction, data poisoning, inference manipulation, and aggressive scraping. New vulnerabilities in LLMs are also being discovered more often and exploited in record time.
Yet, many organisations persist in using outdated tools such as conventional web application firewalls (WAFs). These tools not only fail to detect or defend against these novel attacks, but they can also result in blind spots in visibility, unpredictable model behaviour, and major security gaps.
The API visibility gap
APIs form the connective tissue for AI ecosystems, enabling interaction with data sources, tools, and services. However, many enterprises continue to lack comprehensive, real-time visibility into these interfaces.
Akamai’s API Security Impact Report found that nine out of 10 of global organisations experienced an API-related incident in the past year. In APJ, each API incident costs enterprises an average of US$580,000. APIs supporting AI models are especially risky - they evolve fast, often remain undocumented, and lack sufficient protections as usage scales.
Without ongoing discovery, classification, and governance of APIs, enterprises leave critical AI workloads exposed to danger. Visibility into every API endpoint, especially those responsible for connecting AI systems to external applications, must become a priority.
AI security as a regulatory mandate
Governments across Asia are sharpening their focus on AI governance, making it a regulatory priority and business obligation. National initiatives such as Singapore’s AI Verify framework and Australia’s Digital Platform Regulators Forum are clear signals that safeguarding AI must go hand-in-hand with responsible deployment.

Regulators are building expectations around secure-by-design AI systems, transparent model behaviour, and data accountability. Enterprises that hesitate to act will risk not only falling behind on innovation but also jeopardising their compliance posture, potentially facing significant legal and reputational repercussions.
- Reuben Koh, Director of Security Technology & Strategy, Akamai Technologies APJ
Today, AI security is already being elevated to become a board-level concern and a compliance imperative. Forward-looking organisations are aligning their security strategies with regulatory frameworks, embedding risk management, audit readiness, and ethical oversight directly into AI development and deployment processes.
Steps you can take to secure your AI workloads
For businesses and security leaders, the first step is clarity. Where are AI models deployed - internally, externally, or via open-source? How are they queried, assessed, and governed?
Once mapped, enterprises can consider using these five recommended steps to start protecting their AI deployments:
- Map your AI footprint: Catalogue all in-house, open-source, and third-party models, and document their interactions with enterprise systems.
- Enable continuous API discovery: Implement tools that can automatically detect, classify, and monitor APIs, especially those linked to AI workflows.
- Enforce context-aware access: Apply zero-trust principles to AI, ensuring least-privilege access tailored to user roles and risk profiles.
- Integrate governance into AI development: Security, compliance, and engineering teams should jointly embed controls from model training to deployment.
- Design for observability: Build telemetry and traceability into AI pipelines. Understanding how models behave and make decisions will become important. For example, monitoring input prompts and output content in LLMs will be beneficial for compliance and security incident response.
As these practices take hold, enterprises are also rethinking their security architecture. Interest is rising in AI-native security frameworks, with firewalls and analytics platforms that can detect prompt injection, adversarial input, scraping attempts, and behavioural deviations in real time.
These tools reflect a broader security evolution: from static, rules-based controls to dynamic, intelligent systems designed for AI-powered environments.
AI is reshaping how businesses compete, innovate, and grow. But without securing its foundation, that potential remains limited and at-risk. The evolving AI threat landscape demands new frameworks, new thinking, and cross-functional collaboration.
Enterprises that recognise AI security as a strategic imperative will be best equipped to scale innovation responsibly. The future lies in building proactive, adaptive, and context-aware security systems that can evolve in step with the AI they are meant to protect.
Reuben Koh is Director of Security Technology & Strategy, Akamai Technologies APJ