The ROI for AI needs to be redefined, says Hitachi Vantara’s CTO

AI success in an enterprise requires a fail-fast, scale-smart approach.

For enterprise AI to succeed, leaders need to rethink what success even looks like. Enterprises struggle to see a return on investment (ROI) because they evaluate success through traditional, outdated lenses.

Traditional ROI metrics, such as headcount reduction or cost avoidance, no longer apply to modern AI, especially generative and agentic AI. Instead, ROI should reflect learning speed: how quickly an organisation can test, fail, adapt, and build the infrastructure needed to scale.

“Success comes from the failures,” Hitachi Vantara’s global CTO for AI, Jason Hardy, told iTnews Asia. “We’ve had a lot of customers who have done six, seven, eight POCs before one can land, and that’s okay. That’s what should be expected.”

The success ratio is currently low because enterprises are still in the early, experimental stages of AI adoption and are exploring what is possible, said Hardy.

“This trial-and-error phase means many projects won’t reach full production. In fact, 80 to 90 percent of AI projects fail to make it into production. However, the 10 to 20 percent that do succeed can deliver tremendous value, sometimes enough to justify the entire effort,” he explained.

Citing an example, Hardy said a manufacturing client working with him set out to modernise production using AI but hit an early roadblock. The client discovered the existing infrastructure and data landscape were not able to support the added demands of AI workloads.

Hitting pause on AI can be a strategic move

According to Hardy, the company’s existing infrastructure was already operating at full capacity for daily operations, and introducing AI workloads exceeded its limits.

Recognising these limitations, the organisation made the difficult but necessary decision to suspend its AI efforts temporarily, said Hardy.

Instead of forging ahead and risking failure, the company shifted focus to address core infrastructure and data issues.

He said the team decided to course-correct by re-architecting systems, optimising data pipelines, and ensuring the environment could support both production and AI workloads.

Once the foundations were in place, the company resumed AI efforts.

The result was not just stability, but measurable ROI through smarter, AI-enhanced manufacturing processes, said Hardy.

Maintain a pipeline of ideas rather than betting on one AI initiative

AI pilots often stumble after launch, and IT leaders need to watch for early warning signs.

“One of the earliest and most telling warning signs is poor output quality, and it usually signals underlying data issues: gaps in quality, context, or completeness,” Hardy said.

“It becomes essential to revisit the underlying data and ensure it’s clean, contextual, and appropriate for the problem at hand.”

Another key warning sign is user disengagement.

Hardy added that if users aren’t adopting the tool, it could be due to low trust in the system, irrelevant outputs, or a poor fit with daily workflows.

Sometimes, the user interface may not be intuitive, or the infrastructure might introduce latency, making the experience too slow or frustrating.

Hardy said these shouldn’t be seen as signs of failure, but feedback loops.

Metrics including usage rates, task completion times, or time to first token in generative AI models can offer early insights into what’s working and what’s not, said Hardy.
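Metrics like these can be instrumented cheaply. As a minimal sketch of measuring time to first token, assuming any iterator of tokens stands in for a real model's streaming response (the `fake_stream` generator below is purely illustrative, not a real client API):

```python
import time

def stream_metrics(stream):
    """Consume a token stream and record time-to-first-token (TTFT),
    token count, and total generation time."""
    start = time.monotonic()
    ttft = None
    tokens = 0
    for _ in stream:
        if ttft is None:
            # Latency until the first token arrives
            ttft = time.monotonic() - start
        tokens += 1
    total = time.monotonic() - start
    return {"ttft_s": ttft, "tokens": tokens, "total_s": total}

# Stand-in for a real model stream: simulated start-up delay, then tokens
def fake_stream():
    time.sleep(0.05)  # simulated model latency before the first token
    for tok in ["AI", "pilot", "output"]:
        yield tok

metrics = stream_metrics(fake_stream())
print(metrics["tokens"])  # 3
```

The same wrapper can feed a dashboard alongside usage rates and task completion times, giving the feedback loop Hardy describes a concrete data source.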

The right leadership move isn’t to push harder on a failing initiative; it’s to pause, reassess, and shift resources.

Enterprises need to treat AI like a product backlog, not a moonshot. You should have multiple projects in flight and see which ones gain traction.

- Jason Hardy, CTO for AI, Hitachi Vantara

Readiness is not just technical, it’s also cultural

Despite the hype, AI is not a plug-and-play solution. Its success depends as much on people and processes as on models and infrastructure.

According to Hardy, AI’s success can’t rest solely on IT’s shoulders, and CIOs need to take a collaborative, data-first approach.

Legal, HR, business units, and technology teams all need to be involved from the beginning.

Second, even the most advanced AI is only as effective as the data behind it.

CIOs must ensure that data is accurate, structured, and aligned with the business objective, said Hardy.

Hardy notes that data doesn’t need to be perfect, and organisations shouldn't aim to overhaul data infrastructure upfront.

Instead, organisations should focus on the specific data needed for the AI use case, such as transcripts and call recordings for improving customer service.

Finally, AI success also requires the right expertise.

“Whether through internal teams or external partners, skills in data engineering, model development, and process integration are essential,” Hardy said.

“Start with smaller, manageable projects at the ‘edge’ of the business.”

This allows teams to learn, fail safely, refine processes, and avoid major disruptions.

Hardy added that in complex enterprise environments, long-term value from AI is sustained through a careful balance of infrastructure decisions that prioritise cost-efficiency, data locality, and compliance.

Running AI at scale solely in the cloud is cost-prohibitive

One trend Hardy highlights is the growing realisation that operating AI workloads at scale solely in the cloud is becoming cost-prohibitive.

The cloud does offer flexibility and scalability; however, the total cost of ownership (TCO) over time, especially as inferencing workloads ramp up, is often much higher than that of on-premises infrastructure.

Hardy said AI inferencing, particularly in generative and agentic contexts, introduces unpredictable costs due to usage-based billing models, including per-token charges.

As AI adoption grows, inferencing shifts from controlled back-office tasks to user-driven activities, making cloud costs unpredictable.

Hardy advises organisations to look at on-premises infrastructure, which offers fixed costs and control and is critical for large-scale deployment.
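The cloud-versus-on-prem trade-off Hardy describes can be sketched as a back-of-the-envelope cost model: usage-based billing grows linearly with token volume, while on-prem capacity is a fixed monthly cost, so there is a break-even volume above which on-prem wins. All figures below are hypothetical, not real vendor pricing:

```python
def monthly_cloud_cost(million_tokens, price_per_million):
    """Usage-based cloud inference spend: linear in token volume."""
    return million_tokens * price_per_million

def breakeven_million_tokens(onprem_monthly, price_per_million):
    """Monthly token volume (in millions) at which fixed on-prem
    cost equals usage-based cloud spend."""
    return onprem_monthly / price_per_million

# Illustrative figures only
price = 2.0        # USD per million tokens, hypothetical cloud rate
onprem = 40_000.0  # USD amortised monthly on-prem cost, hypothetical

print(monthly_cloud_cost(5_000, price))         # 10000.0 (5B tokens/month)
print(breakeven_million_tokens(onprem, price))  # 20000.0 (20B tokens/month)
```

The point of the model is the shape, not the numbers: once user-driven inferencing pushes volume past the break-even line, the unpredictable per-token bill exceeds the fixed on-prem cost.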

Additionally, data proximity is key, as latency slows down AI pipelines when compute and data are separated.

He said Hitachi Vantara was able to address this by co-locating GPUs, storage, and vector databases for faster throughput.

Data sovereignty further drives the shift.

With increasing regulations and geopolitical concerns, Hardy said enterprises now favour sovereign or hybrid AI models to retain local control over sensitive data.

“Consider hybrid as the most resilient strategy that combines cloud scalability with on-prem control for cost, performance, and compliance.”

© iTnews Asia