AI Agents are now driving a quantum shift in software development

To thrive, APAC enterprises must help developers pivot.

Businesses in the Asia-Pacific (APAC) region are investing heavily in agentic AI to stay ahead. IDC reports that 70 percent of APAC businesses expect agentic AI to disrupt business models in the next 18 months. As of 2025, two out of five already use AI agents, and over one in two expect to implement them by 2026.

AI agents present huge opportunities, but their highly autonomous nature also brings risks. Each data source, static AI model, and agent inside or outside an organisation becomes another point of failure that developers must secure and monitor, and this is fast becoming a board-level concern.

Recent research from Lenovo highlighted that only 48 percent of IT leaders felt confident in their ability to manage AI development and implementation risks, with more than six out of 10 agreeing that AI agents pose a new kind of insider threat they are not fully prepared to face.

What key considerations should organisations bear in mind as they aim to increase the use of agentic innovation throughout their software development lifecycle?

As agents rise, so do the risks, and they go beyond security and data

The rise of AI agents has upended the way software is built, governed, and managed, introducing new risks. IDC estimates that a third of APAC organisations are concerned about security and data privacy vulnerabilities associated with AI agents. The risks, however, extend beyond security and data privacy.

Operational and development risks are the most immediate and difficult to contain.

When an AI agent is compromised, the impact can spread malicious activity through a web of interconnected systems. Securing these countless, growing risks adds friction to the development process.

- Sunny Rao, Senior Vice President of APAC, JFrog

Score common vulnerabilities and exposures (CVEs) too leniently, and a threat can slip through. Score them too strictly, and developers become overwhelmed with false positives that consume time and resources and erode their capacity to respond quickly to real incidents.
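To make the trade-off concrete, here is a minimal severity-threshold triage sketched in Python. The CVE identifiers, scores, and the 7.0 blocking threshold are all invented for illustration; a real pipeline would pull scores from a scanner and weigh reachability and exploitability, not CVSS alone.

```python
# Hypothetical illustration of the severity-threshold trade-off.
# All findings and scores below are invented.

CVSS_THRESHOLD = 7.0  # assumption: only "high"/"critical" findings block a build

findings = [
    {"id": "CVE-2024-0001", "cvss": 9.8},
    {"id": "CVE-2024-0002", "cvss": 5.3},
    {"id": "CVE-2024-0003", "cvss": 7.5},
]

def triage(findings, threshold):
    """Split findings into those that block the pipeline and those deferred."""
    blocking = [f for f in findings if f["cvss"] >= threshold]
    deferred = [f for f in findings if f["cvss"] < threshold]
    return blocking, deferred

blocking, deferred = triage(findings, CVSS_THRESHOLD)
# Lowering the threshold widens the blocking set (more false positives to chase);
# raising it shrinks the set (more risk that a real threat slips through).
```

The single number `CVSS_THRESHOLD` is exactly the dial the paragraph above describes: every choice of value trades missed threats against developer time spent on noise.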

Supply chain risks add to the developer burden. Many agentic systems are built with open-source software, pretrained models, and countless preset integrations to empower faster development.

However, all it takes is a single poisoned model or one package seeded with malware to expose organisations and individuals in the software supply chain (SSC) to attacks. Even a leaked token in a public repository has the potential to trigger failures that cascade well beyond its origin. The deeper the interconnections, the more destabilising one weak link becomes.

The governance and compliance risks are also immense today. Agentic systems bring risks unique to their autonomy, like black-box decision-making that hinders explainability, unsafe or subversive behaviours that can bypass human intent, and bias embedded in training data that scales into unfair outcomes.

Shadow AI/ML agents running unsanctioned within organisations also amplify these dangers, operating outside oversight and leaving no audit trail.

Agents are driving a quantum shift in software security and delivery, and the workload is immense

Full traceability, down to the binary level or the weights inside a machine learning model, is now expected by stakeholders. Policymakers especially recognise these risks and are pushing for stricter and more comprehensive legislation. India’s lawmakers, for example, are pushing for AI bills of materials to be made mandatory.
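An AI bill of materials of the kind described above could, in one possible shape, record each model and package alongside a digest of its binaries and weights. The fragment below is purely illustrative; the field names are invented and are not drawn from any published standard or from the legislation under discussion.

```json
{
  "component": "support-triage-agent",
  "version": "1.4.2",
  "models": [
    {
      "name": "intent-classifier",
      "weights_digest": "sha256:<digest-of-weight-file>",
      "training_data": ["internal-tickets-2024"],
      "licence": "proprietary"
    }
  ],
  "packages": [
    {"name": "example-agent-framework", "version": "0.2.1", "digest": "sha256:<digest>"}
  ],
  "attestations": ["build-provenance", "vulnerability-scan"]
}
```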

This means enterprises across APAC need to, on demand, prove what their agents did, why they acted, and whether their outputs comply with evolving regulations. The requirement for security everywhere is a quantum shift in the way developers typically operate and adds a massive compliance burden on all teams across the software development lifecycle.

In this quantum shift, the focus is no longer on how fast companies can bring AI agents to market. The real question lies in whether enterprises can ensure that every component, from models to binaries and packages, is secure, explainable, and compliant in real time.

How enterprises can sustainably address new risks in the agentic software lifecycle

Developers are now expected to be all-in-one compliance officers, AI governors, and security sentinels. They are already stretched thin, and throwing more tools at them only creates even more silos and blind spots for them to track.

Enterprises need to take a different approach to sustainably address these risks, while building in trust by design. Here is how they can do so:

1. Create a trusted AI agent system of record

Treat agents as first-class citizens in the SSC. Track every asset, from code and configs to prompts and credentials. Maintain cryptographic audit trails, attach contextual metadata, and enable safe onboarding and retirement. This delivers a single, trusted audit trail for regulators and partners while also accelerating agentic innovation.
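One common way to build such a cryptographic audit trail is a hash chain, where each recorded event commits to the hash of the previous entry, so any later tampering is detectable. The sketch below is a generic Python illustration, not a description of any particular vendor's system, and the event fields are invented.

```python
import hashlib
import json

def append_event(chain, event):
    """Append an agent lifecycle event, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; any tampered entry breaks the chain."""
    prev = "0" * 64
    for record in chain:
        expected = hashlib.sha256(
            json.dumps({"event": record["event"], "prev": record["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

chain = []
append_event(chain, {"agent": "review-bot", "action": "onboarded"})
append_event(chain, {"agent": "review-bot", "action": "prompt-updated"})
```

Onboarding and retirement events appended this way give auditors a single chain to verify, rather than scattered logs across tools.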

2. Take a hybrid human–agent developer approach

Manual oversight alone cannot sustain compliance. Developers should focus on architecture, governance, and intent, while agents co-create through coding, testing, packaging, and monitoring. Automating the remediation of vulnerabilities with evidence capture is one immediate way to free developers to innovate securely.

3. Nurture the Agentic Engineer

A new persona is emerging that blends the skills of a coder, a machine learning practitioner, and a compliance architect into one. Agentic engineers design delivery systems that anticipate risk, embed governance into workflows, and orchestrate interactions between human developers and autonomous agents. They monitor agent behaviour, enforce policies in real time, and translate regulatory requirements into actionable guardrails inside the SSC.

If enterprises invest in elevating their developer teams with skills that match this new persona, they will gain leaders who can drive agentic innovation without sacrificing security, explainability, or compliance.

The path forward for agents amid the quantum shift

The quantum shift in how we develop software is happening, whether organisations are ready or not. Much like the rise of open source demanded a secure SSC, the rise of agentic AI demands a better approach to audit and trust infrastructure.

APAC organisations that embrace such a unified approach will not only mitigate risks but also prime their teams for accelerated innovation with AI agents, and with whatever comes next in the evolution of software development best practices.

 

Sunny Rao is Senior Vice President of APAC, JFrog

© iTnews Asia