AI governance must evolve alongside adoption in APAC


Companies must assess how AI systems align with data residency requirements and existing governance frameworks.


The Asia Pacific region is emerging as one of the fastest adopters of generative AI, with employees embracing new tools at a pace that often exceeds organisational readiness. A recent Boston Consulting Group (BCG) report highlights the scale of this shift. Adoption varies across markets, with India leading at 92 percent and Japan trailing at 51 percent, but overall usage is high: 78 percent of APAC respondents use AI weekly, ahead of their worldwide counterparts. Frontline workers, in particular, are engaging actively, outpacing their global peers.

At the same time, however, governance frameworks are still catching up. A significant proportion of employees report using generative AI tools without formal approval, and some indicate a willingness to bypass restrictions. Yet just over half say their workflows have been formally redesigned to incorporate AI. This suggests that while AI is already embedded in day-to-day work, it is not always implemented in a structured or visible way.

For organisations, this creates both risk and an opportunity to establish clearer foundations.

The need to move from experimentation to structured governance

As companies move beyond early experimentation, governance is becoming a central consideration rather than a secondary one.

Speaking with iTNews Asia, Mei Dent, Chief Product & Technology Officer at TeamViewer, emphasises that AI-related data should be managed with the same level of care as any other sensitive information. Within TeamViewer, this includes operating as a data processor under GDPR principles, where customers retain ownership and control of their data, while the company ensures appropriate safeguards are in place.

To meet customer and regulatory requirements, TeamViewer works with cloud providers such as Microsoft and Google to align processing locations with data residency regulations. The company also applies encryption and anonymisation to protect personally identifiable information, and continues to pursue certifications that reflect local compliance standards.

These measures reflect a broader principle: AI systems should be integrated into existing data governance frameworks rather than treated as exceptions.

Evaluating evolving workflows and interfaces

Many organisations are currently focused on enhancing existing workflows with AI, such as IT support teams improving ticket triage, matching solutions more efficiently, or summarising information.

Over time, however, there is likely to be a broader rethinking of how work is structured, including the potential to reduce reliance on traditional artefacts like tickets.

As AI capabilities expand, there is growing emphasis on ensuring that outputs can be validated and understood.

- Mei Dent, Chief Product & Technology Officer at TeamViewer

Dent says this approach is built into how TeamViewer enhances existing workflows with AI: outputs produced by AI are systematically verified. In certain scenarios, one AI system generates code or recommendations, and a separate AI system independently validates those results.
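
TeamViewer has not published the details of this pipeline, but the general generate-then-validate pattern Dent describes can be sketched as follows. Both "model" functions here are hypothetical stand-ins for calls to two independent AI systems, not a real API.

```python
# Minimal sketch of a generate-then-validate pattern: one model proposes
# an output, a second, independent model checks it, and anything that
# fails validation is escalated to a human reviewer.
# generate_with_model and validate_with_model are illustrative placeholders.

def generate_with_model(task: str) -> str:
    # Placeholder for a call to the generating model.
    return f"proposed fix for: {task}"

def validate_with_model(task: str, output: str) -> bool:
    # Placeholder for a call to a separate, independent validating model.
    return task in output

def run_with_validation(task: str) -> dict:
    output = generate_with_model(task)
    if validate_with_model(task, output):
        return {"status": "accepted", "output": output}
    # Failed validation: route to a human instead of shipping silently.
    return {"status": "needs_human_review", "output": output}

result = run_with_validation("ticket 1234: login timeout")
print(result["status"])
```

The key design point is that the validator is a separate system with no stake in the generator's output, so a single model's blind spots are less likely to pass through unchecked.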

Another example is the integration of AI to optimise workflows for improved productivity. However, maintaining a 'human in the loop' remains essential for ensuring the effectiveness of these enhancements. “Organisations should implement measures such as activity logs, review checkpoints, and mechanisms to suspend or reverse AI-driven actions when appropriate,” she states.
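
The measures Dent lists — activity logs, review checkpoints, and the ability to suspend or reverse AI-driven actions — can be sketched as a small controller around agent actions. This is an illustrative sketch, not TeamViewer's implementation; the class and its action names are invented for the example.

```python
# Sketch of human-in-the-loop controls: an audit log of every action,
# a review checkpoint before risky operations, and the ability to
# suspend the agent or reverse what it has already done.
from datetime import datetime, timezone

class AIActionController:
    def __init__(self, require_review_for=("delete", "deploy")):
        self.log = []                       # append-only activity log
        self.require_review_for = require_review_for
        self.suspended = False              # kill switch for AI actions

    def perform(self, action: str, undo, approved: bool = False):
        if self.suspended:
            return "suspended"
        if action in self.require_review_for and not approved:
            return "pending_review"         # checkpoint: human must approve
        self.log.append({
            "action": action,
            "undo": undo,                   # callable that reverses the action
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return "done"

    def reverse_last(self):
        entry = self.log.pop()
        entry["undo"]()                     # roll the most recent action back
        return entry["action"]

controller = AIActionController()
controller.perform("summarise_ticket", undo=lambda: None)
controller.perform("delete", undo=lambda: None)             # held for review
controller.perform("delete", undo=lambda: None, approved=True)
```

Requiring every action to register an undo callable up front is what makes "reverse AI-driven actions when appropriate" practical, rather than an afterthought.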

This evolution is also expected to influence user interfaces. Dent posits that systems that were originally designed for human input may increasingly accommodate AI-driven interactions, which in turn requires greater transparency, clear validation steps, and mechanisms for human oversight. Features such as audit trails and control points become essential in maintaining trust in these environments.

Rather than relying on a single provider, many organisations are adopting a flexible approach to AI models, says Dent.

At TeamViewer, this includes working across platforms such as Google Cloud Platform, Microsoft Azure and OpenAI, selecting models based on performance and cost considerations. Tools like Claude are also being evaluated for specific use cases, particularly in areas like code-related tasks.

AI adoption is not limited to customer-facing products. For instance, design teams are exploring AI capabilities within Figma, while engineering teams are applying AI to code generation, testing, and review.

As usage expands, organisations are placing more focus on evaluating and standardising these tools to ensure consistency and manage risk effectively.

Equally important is the ability to demonstrate value to senior management. In-product dashboards, for example, can help visualise what changes were made, how long tasks took, and how AI-assisted workflows compare to manual processes. This kind of visibility supports both accountability and more informed decision-making. “It’s about proving business impact,” she explains.
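
As a rough illustration of the comparison such a dashboard might surface, the snippet below computes median task durations for AI-assisted versus manual workflows. The sample durations are invented for the example.

```python
# Sketch of a dashboard-style comparison: median task duration for
# AI-assisted versus manual workflows. Durations are illustrative only.
from statistics import median

manual_minutes = [42, 38, 55, 47, 60]
ai_assisted_minutes = [18, 22, 15, 30, 20]

def summarise(label: str, durations: list) -> dict:
    return {"workflow": label, "median_min": median(durations)}

report = [summarise("manual", manual_minutes),
          summarise("ai_assisted", ai_assisted_minutes)]
for row in report:
    print(f'{row["workflow"]}: {row["median_min"]} min (median)')
```

Medians are used rather than means so that a few unusually long tasks do not distort the comparison presented to management.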

Workforce implications and the need for support

The rapid adoption of AI is accompanied by a mix of optimism and concern among employees.

BCG data indicates that more than half of frontline workers in APAC are concerned about potential job displacement, even as overall sentiment toward AI remains positive. This highlights the importance of communication and support, particularly for those most directly affected by automation.

Dent notes that while some roles may evolve or diminish, particularly in areas such as business process outsourcing and low-level admin roles, AI also creates opportunities to shift human effort toward more complex and strategic tasks. However, this transition requires deliberate investment in reskilling and upskilling.

Organisations are beginning to adjust their talent strategies by pairing new graduates who are fluent in AI tools with seasoned professionals. At the same time, there is recognition that not all employees will transition easily, underscoring the need for broader organisational and governmental support.

Moving forward with structure and adaptability

As AI adoption continues to accelerate, Dent encourages organisations to take a more structured and thoughtful approach.

This includes carefully assessing how AI systems align with data residency requirements and existing governance frameworks, whether deployed on-premises, in the cloud, or in hybrid environments. It also involves defining clear transparency and validation mechanisms within AI-driven workflows, supported by documentation that captures both the steps taken and the resulting business impact.

At the same time, companies benefit from ongoing evaluation of internal AI tools, allowing them to move toward greater standardisation while still leaving room for experimentation. Providing customers with clear options around how their data is used, along with access to remediation processes, can further strengthen trust.

Finally, investing in workforce readiness remains essential. Developing structured upskilling strategies, particularly for roles most affected by AI, can help ensure that adoption is both sustainable and inclusive.

In this context, governance is not simply about risk mitigation, says Dent. It plays an important role in enabling organisations to adopt AI in a way that is consistent, transparent, and aligned with long-term business value.

© iTnews Asia