As generative AI moves from pilot to production, enterprises struggle to scale implementations in a governed and sustainable way. Scaling generative AI across the enterprise requires groundwork, including infrastructure, quality training data, integration frameworks, and outcome-driven goals.
Etiqa Insurance Singapore, part of regional financial group Maybank, has moved beyond early pilots to build context-aware, embedded AI systems that extend across customer and employee workflows.
“Over the past year and a half, Etiqa’s AI strategy has evolved, shifting from traditional machine learning and intent-based systems towards generative AI systems designed for real-time, natural language engagement,” Etiqa’s chief technology officer, Dennis Liu, told iTnews Asia.
This includes two generative AI chatbots planned for 2025.
The first is a customer-facing chatbot designed to improve query resolution rates and reduce the need for human handoffs, giving customers faster, more comprehensive self-service options, said Liu.
The second is an internal GenAI assistant designed to empower sales advisors, enhance product training, and streamline needs analysis, boosting staff productivity and equipping teams with richer insights, he added.
Autonomously resolved 44 percent of queries
This shift marks a deliberate departure from earlier RASA-based bots, which were limited to predefined conversation flows and basic intent handling.
The company has trained generative AI on proprietary knowledge bases and historical customer interactions, enabling contextually aware, personalised conversations.
These advancements are built on earlier operational wins that helped validate AI’s potential while exposing the limits of task-specific tools.
Liu said Etiqa’s E-CLEVA video-assisted claims service reduced vehicle damage assessment and approval time by up to 80 percent, through real-time interaction between claimants and surveyors.
The company automated travel delay claims using AI-powered straight-through processing (STP) and integrations with PayNow and MyInfo, enabling instant approvals.

Etiqa’s AI-powered chatbots have recorded a 1,233 percent increase in monthly usage, from over 300 to 4,000 threads last year. The chatbot autonomously resolved 44 percent of queries, easing agent workload and enhancing service efficiency.
- Dennis Liu, Chief Technology Officer, Etiqa International Holdings
Liu said the team is piloting the use of Vision LLMs to automate claims document reviews, with projected reductions in processing time of up to 50 percent.
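Etiqa has not disclosed how this pilot is built; the sketch below only illustrates the general pattern, with a hypothetical call_vision_model() wrapper and invented field names standing in for whatever vision LLM and claim schema the team actually uses.

```python
# Illustrative only: the helper call_vision_model() is a hypothetical wrapper
# around a vision LLM endpoint, and the field names are invented for the example.
import json
from pathlib import Path

REQUIRED_FIELDS = ["policy_number", "claim_amount", "incident_date"]

def call_vision_model(image_bytes: bytes, prompt: str) -> str:
    """Hypothetical stub: send the document image and prompt to a vision LLM
    and return its raw text response (JSON expected)."""
    raise NotImplementedError("wire up a vision LLM provider here")

def review_claim_document(path: str) -> dict:
    """Extract key fields from a scanned claims document and flag gaps for
    human review, instead of routing every document to a person."""
    prompt = (
        "Extract the following fields from this insurance claim document "
        f"and answer as JSON: {', '.join(REQUIRED_FIELDS)}"
    )
    raw = call_vision_model(Path(path).read_bytes(), prompt)
    fields = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    return {"fields": fields, "needs_human_review": bool(missing), "missing": missing}
```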
On the customer service front, the next generation GenAI chatbot is projected to increase autonomous resolution rates by over 40 percent, he added.
This will reduce handovers to live agents, freeing up customer service teams to focus on complex queries.
The RASA-based chatbot implementation laid a solid foundation for the company, reducing live chat escalations and building the operational and data infrastructure necessary for advanced AI adoption.
Gradual transition to GenAI-powered chatbots
The company’s current chatbots, built on the RASA framework, are trained using structured intents and predefined conversation scenarios.
This allows for automation of common customer queries, including policy servicing, claims status checks and payment reminders.
Liu said the bots handle basic interactions effectively but follow fixed conversational flows with limited flexibility, making complex or unexpected queries difficult to address without human intervention.
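To make that ceiling concrete, here is a minimal sketch of intent-based routing. It is not Etiqa's actual RASA configuration, and a keyword lookup stands in for a trained intent classifier, but it shows why queries outside the predefined flows end up with a live agent.

```python
# Minimal illustration of an intent-based bot's ceiling: every query must map
# to a predefined intent and canned flow; anything else falls through to a human.
FLOWS = {
    "claims_status": "Please share your claim reference number.",
    "policy_servicing": "Which policy would you like to update?",
    "payment_reminder": "Your next premium due date is shown in your policy portal.",
}

KEYWORDS = {
    "claim": "claims_status",
    "policy": "policy_servicing",
    "payment": "payment_reminder",
    "premium": "payment_reminder",
}

def handle(query: str) -> str:
    # A production system uses a trained intent classifier; a keyword lookup
    # keeps this example self-contained.
    for word, intent in KEYWORDS.items():
        if word in query.lower():
            return FLOWS[intent]
    return "Transferring you to a live agent."  # the transfer-to-agent path

print(handle("Where is my claim?"))        # follows the claims_status flow
print(handle("Can I insure my drone?"))    # unexpected query -> human handoff
```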
He added that the company tracks several indicators to evaluate chatbot performance (see the sketch after this list):
- Transfer-to-agent rate, which reflects how often the chatbot is unable to resolve a query independently
- Resolution rate, showing the percentage of queries successfully handled without human intervention
- Sentiment analysis, used to gauge customer emotion and satisfaction during interactions
- Customer satisfaction index, providing insight into overall satisfaction with the service experience
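A hedged sketch of how such indicators could be computed from interaction logs follows; the log schema and field names are assumptions for illustration, not Etiqa's actual definitions.

```python
# Sketch of computing the indicators above from interaction logs.
# The schema (resolved/transferred flags, sentiment score) is assumed.
from dataclasses import dataclass

@dataclass
class Interaction:
    resolved_by_bot: bool   # query closed without a human agent
    transferred: bool       # handed over to a live agent
    sentiment: float        # e.g. -1.0 (negative) to 1.0 (positive)

def chatbot_kpis(log: list[Interaction]) -> dict:
    n = len(log)
    return {
        "resolution_rate": sum(i.resolved_by_bot for i in log) / n,
        "transfer_to_agent_rate": sum(i.transferred for i in log) / n,
        "avg_sentiment": sum(i.sentiment for i in log) / n,
    }

sample = [Interaction(True, False, 0.6), Interaction(False, True, -0.2)]
print(chatbot_kpis(sample))  # prints the three rates for the sample log
```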
Liu said, “Looking ahead, the company’s gradual transition to GenAI-powered chatbots represents an upgrade in capability. These systems will be fine-tuned on Etiqa’s proprietary knowledge base and historical customer interactions, enabling natural, contextually aware and personalised conversations.”
The company is expanding its performance and success metrics to include resolution time, which assesses how quickly queries are handled, and post-interaction customer feedback, used to improve conversational quality, Liu added.
It also tracks escalation reduction, measuring how effectively the chatbot handles complex queries before needing human support.
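Liu describes systems fine-tuned on proprietary data; one common, complementary way to keep such a chatbot grounded in a knowledge base is to retrieve relevant content before generating a reply. The sketch below assumes hypothetical embed() and generate() wrappers around whichever models are used, and is not a description of Etiqa's actual architecture.

```python
# Hedged sketch of grounding a GenAI chatbot in a proprietary knowledge base.
# embed() and generate() are hypothetical wrappers around a model provider.
def embed(text: str) -> list[float]:
    raise NotImplementedError("call an embedding model here")

def generate(prompt: str) -> str:
    raise NotImplementedError("call a fine-tuned LLM here")

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer(query: str, knowledge_base: list[str], top_k: int = 3) -> str:
    # Retrieve the most relevant policy/FAQ snippets, then let the model answer
    # with that context so replies stay specific to the insurer's products.
    q_vec = embed(query)
    ranked = sorted(knowledge_base, key=lambda doc: cosine(q_vec, embed(doc)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return generate(f"Answer using only this context:\n{context}\n\nCustomer: {query}")
```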
Faces mindset, legacy and talent challenges
Etiqa tackled three challenges during its AI transformation: cultural shift, legacy systems, and talent readiness.
According to Liu, the first hurdle was adopting a digital-first mindset across the organisation.
To drive acceptance, the company built a culture that encourages experimentation and flexibility.
Teams embraced a “think big, start smart, fail fast, learn fast” approach, which supported learning and early adoption of AI initiatives, said Liu.
The second challenge was legacy systems tightly linked to core operations.
Instead of overhauling infrastructure, Etiqa progressively replaced outdated processes.
This has helped integrate AI capabilities without disrupting the business.
Liu mentioned that the third was a shortage of AI-ready talent.
Etiqa responded by investing in its tech workforce, which includes training internal talent, onboarding high-potential graduates, and hiring data scientists and AI ops engineers.
As AI deployments expanded, Etiqa also introduced governance frameworks to ensure compliance, scalability, and long-term sustainability.