Artificial intelligence in the enterprise is quietly shifting from experimentation to expectation.
After years of pilots, proofs of concept and isolated automation projects, many organisations are discovering that treating AI as a standalone tool delivers limited value. Instead, the next phase of adoption is seeing AI embedded directly into the way businesses operate, shaping decisions, workflows and governance structures in the background.
In conversation with iTNews Asia, Kelvin Cheema, Global Chief Information Officer and Managing Director, Global Transformation & Change, Acuity Analytics, shared insights on how enterprises are moving towards AI as “invisible infrastructure” rather than a bolt-on technology.
“AI stops being a point solution when we move from isolated pilots to systematic capability that reshapes how decisions are made and how work gets done,” he said.
Rather than being confined to analytics sandboxes or side experiments, AI must influence core processes such as financial close, procurement, forecasting and risk management.
When models begin influencing everyday planning cycles and decision rights, AI becomes inseparable from the operating model itself.
“It becomes more like the enterprise as code, where processes, decisions and governance are structured, testable and adaptive. That’s when AI becomes invisible infrastructure,” he explained.
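To make the "enterprise as code" idea concrete, the sketch below shows, purely as an illustration, how a single procurement decision right might be expressed as structured, testable code. The rule, thresholds and field names are hypothetical assumptions for this article, not a description of Acuity Analytics' systems.

# Hypothetical example: a procurement approval rule codified so the decision logic
# is explicit, version-controlled and unit-testable, in the spirit of "enterprise as code".
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    amount: float       # requested spend in the organisation's base currency
    risk_score: float   # 0.0 (low) to 1.0 (high), e.g. supplied by a risk model

def requires_human_approval(req: PurchaseRequest,
                            amount_limit: float = 50_000.0,
                            risk_limit: float = 0.7) -> bool:
    # Codified decision right: high-value or high-risk requests go to a person.
    return req.amount > amount_limit or req.risk_score > risk_limit

# Because the rule is code, process changes can be tested before they reach production.
assert requires_human_approval(PurchaseRequest(amount=80_000, risk_score=0.2))
assert not requires_human_approval(PurchaseRequest(amount=10_000, risk_score=0.3))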
Why AI projects fail
Many organisations begin their AI journey by hunting for use cases, but Cheema believes this approach often leads to fragmented deployments that sit around the edges of the business.
“Organisations should first understand their end-to-end processes and redesign workflows before introducing automation,” he added.
Despite widespread experimentation, Cheema noted that true scale remains rare. He stressed that most AI failures are organisational rather than technical. “AI pilots often stall because there’s no clear business ownership, no defined value metrics, and they’re executed in functional silos with disconnected data.”
Treating AI as an IT initiative rather than a transformation effort is another common mistake. He estimates that fewer than five percent of organisations today are “truly AI future-ready”, meaning they have scaled AI into measurable business outcomes.
To move beyond pilots without creating change fatigue, Cheema advocates a platform-based approach rather than one-off projects. This includes staged adoption, embedded governance and strong feedback loops.

AI must be auditable, explainable and integrated into daily decision rights. Culture and leadership inertia are often bigger barriers than technology itself.
- Kelvin Cheema, Global Chief Information Officer and Managing Director, Global Transformation & Change, Acuity Analytics
Building AI into the backbone
A recurring theme in Cheema’s strategy is the importance of integration. Disconnected systems and fragmented data architectures, he warned, can undermine even the most advanced models.
“Layering AI on fragmented data leads to biased outputs, slow feedback loops and scale stagnation. Integration beats raw intelligence,” he said.
Within Acuity Analytics, Cheema said the company has consolidated its systems into an integrated cloud-based enterprise stack. It has, for example, invested in the Oracle Fusion ecosystem for ERP, HCM and performance management functions, alongside a unified data warehouse and analytics capabilities, to create a consistent and governed data foundation.
Cheema said the shift has been from AI as a feature to AI as a foundational capability. “We’re treating AI as the backbone of how we operate, not something we add later,” he said.
When it comes to measuring success, Cheema believes organisations often focus too narrowly on adoption rates or cost reductions, arguing that the emphasis should instead be on business impact.
He points to improvements in forecast accuracy, faster access to insights, shorter process cycle times and more predictable outcomes as the signals of whether decision quality has actually improved. Client satisfaction, revenue contribution and governance maturity, including explainability and audit trails, are also important indicators.
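As an illustration of how such indicators could be captured, the hypothetical sketch below logs each AI-assisted decision with its inputs, explanation and eventual outcome; the record fields and the error metric are assumptions, not a description of how Acuity Analytics measures these things.

# Hypothetical example: an auditable record of an AI-assisted decision, plus a simple
# forecast-error metric that can only be computed because outcomes are logged.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    decision: str                    # e.g. "approve_forecast_adjustment"
    model_version: str               # which model informed the decision (audit trail)
    inputs: dict                     # data the model saw at decision time
    explanation: str                 # human-readable rationale (explainability)
    predicted: float                 # the model's forecast
    actual: Optional[float] = None   # filled in once the real outcome is known
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def mean_absolute_percentage_error(records: list[DecisionRecord]) -> float:
    # Decision-quality signal over closed records; returns NaN if nothing has closed yet.
    closed = [r for r in records if r.actual]
    if not closed:
        return float("nan")
    return sum(abs(r.predicted - r.actual) / abs(r.actual) for r in closed) / len(closed)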
The next phase
Looking ahead, Cheema expects enterprise operating models to evolve rapidly over the next three to five years. Ultimately, competitive advantage will depend less on who has the most advanced algorithms and more on how well organisations design their operating models.
“The difference between winners and laggards won’t be the technology itself. It will be the operating model, governance maturity and how well humans and AI collaborate,” he added.
For many enterprises, that shift signals the end of AI as a project and the beginning of AI as infrastructure.