As artificial intelligence (AI) moves from experimentation to enterprise-wide deployment, many organisations are discovering a critical dependency: AI cannot operate at scale without an intelligent, adaptive, and secure network foundation.
While investment in models, data pipelines, and GPUs continues to surge, the networking layer, long treated as a passive transport mechanism, has emerged as a decisive factor determining whether AI succeeds or stalls.
In a recent conversation with iTNews Asia, Mark Ablett, Vice President for Asia Pacific and Japan at HPE Networking, explains why AI-native networking is becoming essential infrastructure for the AI-driven enterprise.
“AI workloads demand real-time responsiveness and adaptability across distributed environments. If the network can’t match that level of intelligence, it becomes the bottleneck to the entire AI strategy,” Ablett said.
And for many organisations, this limitation is rooted in outdated networking architectures not designed for the dynamic, high-throughput demands of modern AI.
AI-Native networking: A foundational shift
Ablett stresses that the rise of AI-native networking represents more than an incremental improvement: it is a full paradigm shift. “An AI-native network is fundamentally different because it's conceived and built with AI and for AI at its core, not added as an afterthought or bolt-on feature,” he said.
Traditional networks rely heavily on manual configuration and operate reactively, requiring human intervention when issues occur. Even software-defined networks, while more flexible, still depend on administrators to set rules and respond to disruptions. In contrast, AI-native networks continuously learn, adapt, and improve autonomously.
“These intelligent systems dynamically adapt and scale to meet evolving demands, resolving issues proactively without constant human oversight,” Ablett noted. This intelligence enables organisations to eliminate manual bottlenecks, streamline operations, and support high-performance environments optimised for mission-critical AI workloads.
Why AI pilots fail to scale
Many AI initiatives demonstrate promising results in controlled pilots but struggle to transition into production environments. Ablett attributes this breakdown primarily to inadequate network infrastructure.
“AI pilots succeed in controlled environments with limited users and well-structured datasets. Production is different… That’s where traditional networks start to break down,” he explained.
As deployments expand, enterprises encounter bandwidth limitations, latency spikes, and inconsistent performance across hybrid environments. “These challenges are not peripheral; they directly affect model performance and user experience,” Ablett said.
He added that enterprises often misunderstand AI’s networking demands, viewing AI as a software or data problem rather than an infrastructure challenge. “The network feels invisible when it works, until it doesn't.”
The limits of traditional networks
The limitations of traditional networks become particularly evident in distributed AI training scenarios. Ablett provided a clear illustration: “On a traditional network, congestion may take 30 seconds to detect and even longer to manually reroute, leaving expensive GPUs idle and training stalled.”
In sectors like healthcare, these delays can have real human consequences. Diagnostic AI processing medical images cannot tolerate even milliseconds of unanticipated latency. “Even a two-second delay can affect clinical outcomes. An intelligent network recognises the criticality and prioritises traffic accordingly.”
The irony, Ablett notes, is that enterprises invest heavily in models and talent while underinvesting in the next-gen network fabric required to support them.

Ultimately, the limitation isn’t bandwidth, but the absence of intelligence to adapt at the pace AI requires.
- Mark Ablett, Vice President for Asia Pacific and Japan at HPE Networking
As AI expands across hybrid and multicloud environments, security risks are escalating, particularly when organisations attempt to retrofit AI onto legacy architectures. “Legacy architectures often create silos that AI can unintentionally exploit,” Ablett warned.
He emphasised that security must be integrated into the network itself, not added as third-party bolt-ons, if organisations are to protect devices, data, and users in an always-connected world.
Ablett also identified several warning signs that a network cannot support enterprise-wide AI: unpredictable latency spikes, data bottlenecks between edge and cloud, fragmented visibility across network domains, and heavy reliance on manual troubleshooting.
“These are all symptoms of networks that are still operating manually rather than intelligently,” he explained. AI-native networks can address these challenges through real-time telemetry, predictive analytics, and adaptive automation.
The road ahead
Looking ahead, Ablett believes the most critical network capabilities over the next three to five years will revolve around agility, built-in security, and AI-driven automation. Networks must become self-learning, predictive, and resilient as workloads and data move fluidly across environments.
“AI-native operations will indeed become the new backbone of digital transformation… an embedded intelligence layer that continuously optimises performance and enables enterprises to innovate faster,” he said.
As organisations accelerate their AI strategies, the message is clear: scalable AI is no longer limited by compute or algorithms, but by the intelligence of the network supporting it. “The question isn’t whether to invest in AI, but whether your infrastructure will allow that investment to deliver its full value.”