
Why a ‘two-speed’ AI strategy can help your enterprise achieve ROI goals

The success of AI depends on strong data foundations, workflow integration and measurable KPIs over “moonshot” pilots.

By Abbinaya Kuzhanthaivel on May 15, 2026 3:34PM

Artificial intelligence investments continue to accelerate across enterprises, but many organisations are still struggling to convert pilots into measurable business returns. While proof-of-concept projects often generate excitement internally, scaling them into production environments has proven far more difficult.

Speaking with iTnews Asia, Srikant Gokulnatha, senior vice president of AI and Analytics at Oracle, shares his perspective on why some AI initiatives succeed while others remain trapped in proof-of-concept mode, the warning signs organisations often miss, and why a well-developed data architecture can be the most important long-term AI investment.

Gokulnatha said organisations that succeed with AI initiatives usually begin with a clearly identified business problem rather than technology experimentation.

Successful AI adoption typically shares five characteristics: executive sponsorship, clearly defined KPIs, data readiness, allocated budgets, and a strong sense of urgency.

- Srikant Gokulnatha, senior vice president of AI and Analytics, Oracle. 

“The projects that work are not experiments. They are tied to a business problem with measurable outcomes that a stakeholder genuinely cares about,” he added.

He noted that many stalled AI initiatives lack internal ownership and are often pursued simply because companies feel pressure to “do something with AI.”

Why successful pilots often fail at scale

While many companies report positive outcomes during pilot stages, Gokulnatha said the real challenge emerges when organisations attempt to operationalise AI across larger environments.

“A pilot can work well in a standalone fashion. But eventually people have to use it as part of their everyday workflows,” he explained.

He argued that workflow integration is often underestimated during early-stage AI deployments, with organisations frequently treating integration as an afterthought instead of designing for it from the beginning.

He added that organisational coordination becomes critical at scale because the teams building AI systems are often different from the teams managing operational applications and infrastructure.

Data remains the biggest bottleneck

Despite rapid advances in large language models (LLMs), Gokulnatha believes the biggest reason AI projects stall is still data-related. Prolonged delays in preparing data, he said, are among the clearest warning signs that an initiative is in trouble. “A lot of AI initiatives stall because organisations don’t have the right data architecture in place,” he said.

He explained that while packaged enterprise applications often already contain structured workflows and accessible datasets, more ambitious AI projects require enterprises to combine structured, unstructured and external data sources, significantly increasing complexity.

Beyond clean data, more advanced AI use cases increasingly require semantic layers, ontologies, and contextual mapping that allow AI systems to understand business definitions and relationships.

“When somebody says ‘top-performing products,’ do they mean by revenue, by margin, or by volume?” he said. “Those constructs don’t exist naturally in the data.”
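To make the point concrete, here is a minimal, hypothetical sketch of what a semantic-layer entry might capture (not Oracle's implementation, and all names are illustrative): the ambiguous business phrase “top-performing products” is bound to one agreed definition that an AI system can resolve consistently instead of guessing.

```python
# Hypothetical sketch of a semantic-layer entry: the business phrase
# "top-performing products" is mapped to an explicit, agreed definition
# (metric, aggregation, ranking) rather than left for the model to guess.
from dataclasses import dataclass


@dataclass
class MetricDefinition:
    phrase: str        # business language users actually use
    column: str        # field in the governed dataset
    aggregation: str   # how the field is rolled up
    rank_order: str    # "desc" = highest first
    top_n: int         # how many items "top" means


SEMANTIC_LAYER = {
    "top-performing products": MetricDefinition(
        phrase="top-performing products",
        column="gross_margin",   # the agreed meaning: margin, not revenue or volume
        aggregation="sum",
        rank_order="desc",
        top_n=10,
    ),
}


def resolve(phrase: str) -> MetricDefinition:
    """Return the agreed definition for a business phrase, or fail loudly."""
    try:
        return SEMANTIC_LAYER[phrase.lower()]
    except KeyError:
        raise ValueError(f"No agreed business definition for: {phrase!r}")


print(resolve("top-performing products"))
```

The value is not the code itself but the fact that the definition lives in one governed place, so any AI system answering the question applies the same meaning.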

The importance of a ‘two-speed’ AI strategy

To balance short-term pressure for results with long-term transformation goals, Gokulnatha said leading organisations are increasingly adopting what he described as a “two-speed” AI strategy.

“One track focuses on delivering quick wins through simpler AI projects. The second track focuses on building the long-term data and platform capabilities needed for more sophisticated AI initiatives,” he explained.

He argued that early operational wins are important because they build internal confidence and create momentum for larger investments. “When organisations demonstrate ROI from smaller projects, they gain the credibility to fund larger-scale AI transformation initiatives,” he said.

Examples of successful near-term projects include process automation, customer service copilots and financial close agents that identify anomalies and exceptions during monthly reporting cycles. “These projects work because the business problem is clearly defined, the data already exists and the outcome is measurable,” he said.
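As a rough illustration of the anomaly-flagging idea behind such a close agent (a hypothetical sketch, not a description of any particular product), monthly ledger movements can be screened against their own recent history and unusually large swings surfaced for review:

```python
# Hypothetical sketch: flag general-ledger accounts whose latest monthly
# movement deviates sharply from their recent history (simple z-score rule).
from statistics import mean, stdev


def flag_anomalies(history: dict[str, list[float]], threshold: float = 3.0) -> list[str]:
    """history maps account -> monthly net movements, oldest to newest."""
    flagged = []
    for account, amounts in history.items():
        *past, latest = amounts
        if len(past) < 3:
            continue  # not enough history to judge against
        sigma = stdev(past)
        if sigma == 0:
            continue  # perfectly stable account, nothing to compare
        z = abs(latest - mean(past)) / sigma
        if z > threshold:
            flagged.append(account)
    return flagged


# Example: travel expenses spike far outside their usual range.
ledger = {
    "travel_expenses": [12_000, 11_500, 12_300, 11_900, 48_000],
    "office_rent":     [30_000, 30_000, 30_000, 30_000, 30_000],
}
print(flag_anomalies(ledger))  # -> ['travel_expenses']
```

The outcome of a project like this is easy to measure because the output is countable: which accounts were flagged and how many required follow-up during the close.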

In contrast, ambitious AI projects involving fragmented enterprise data environments often struggle because companies underestimate the complexity of building the required data infrastructure.

ROI should be measured through business outcomes

For boards, CFOs, and executive teams, Gokulnatha stressed the importance of evaluating AI investments against concrete business outcomes rather than technical milestones. He pointed to metrics such as margin improvement, cost reduction, revenue growth, customer experience and risk reduction as stronger indicators of AI value creation.

In sectors like construction, for example, he said AI projects focused on procurement optimisation and schedule-risk reduction are delivering measurable operational benefits.

“Cost overruns often happen because project schedules slip. Reducing those risks creates very tangible business value,” he added.

Despite concerns around long-term AI investment cycles, he said enterprises should expect measurable results quickly. “Our pilots are typically four weeks or less. Within that time, we’re able to quantify the benefits and help customers determine whether to move into production.”

Once deployed at scale, he said organisations should begin seeing meaningful business value “within months and not years.”

Long-term investments will benefit from a strong data foundation

As AI models continue to evolve rapidly, Gokulnatha believes that enterprise investments in data foundations will remain relevant regardless of model changes.

“The large language models are trained on public data. What enterprises uniquely have is private data,” he added.

According to Gokulnatha, the long-term competitive advantage for enterprises will depend on how effectively they organise and contextualise their proprietary information across applications, documents, IoT streams, and historical systems.

He added that semantic layers and business context graphs will become increasingly important as enterprises move toward more sophisticated AI reasoning systems.

© iTnews Asia