As AI adoption rapidly rises among Singapore businesses, especially SMEs, the conversation is moving from whether companies should implement AI to whether their employees are equipped to use it effectively.
Ongoing national discussions about SkillsFuture Level-Up, AI governance, and boosting SME productivity reflect a key concern: although AI tools are widely available, there remains a noticeable gap in AI proficiency, particularly among mid-career professionals and those in non-technical roles.
To help address this gap, Tribe Academy, an institute focused on practical learning for professionals, has identified a widening divide between access to AI and effective use of AI in the workplace. Many organisations have implemented AI solutions, yet their teams may lack the hands-on skills needed to weave these tools into everyday tasks, leading to risks such as stalled productivity, unsatisfactory ROI, and heightened workforce anxiety.
iTNews Asia spoke to Felicia Tan, Director of Tribe Academy, about what Singapore companies must consider to bridge these gaps.
iTNews Asia: Why is AI literacy no longer the exclusive domain of tech roles, but essential for managers, analysts, and operations teams?
Tan: It doesn't matter whether you use AI. If your competitors do and your clients do, that has already changed what your managers and clients expect from you.
When a client knows that a market analysis or a proposal draft can be turned around in hours rather than days, they're no longer impressed by your two-week timeline. When leadership sees one team producing weekly insights that used to take a month, every other team's pace suddenly looks like a problem. That's the moment AI literacy stops being a tech team concern and becomes essential for managers, analysts, and operations alike.
For managers, the risk is miscalibration. A manager who does not understand what AI can do will either overpromise what their team delivers manually or, worse, underdeliver against a competitor who has already embedded it into their workflow.
AI literacy for managers isn’t about being the best prompter but about setting the standard for what “good” looks like at speed. Knowing AI’s affordances as well as its limitations, such as which workflows need a human in the loop, where the guardrails sit, and what actually requires judgement versus what can be accelerated, is now key to a manager’s role in the age of AI.
For analysts, the challenge is that AI can generate insights quickly, but it can also generate very confident nonsense. A critical eye is needed: the analyst’s role is shifting from report writer to auditor. Their value will lie in being discerning enough to check assumptions, validate data, and stress-test conclusions. If they can’t do that, AI will just help them be wrong faster.
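To picture that auditing habit in practice, here is a minimal, hypothetical Python sketch of cross-checking an AI-generated figure against the source data before it reaches a report. The dataset, field names, claimed total, and tolerance are all illustrative assumptions, not anything from Tribe Academy’s curriculum:

```python
# Hypothetical sketch: treat an AI-generated figure as a claim to audit,
# not an answer to copy. All names and numbers are illustrative.

raw_sales = [
    {"region": "North", "revenue": 120_000},
    {"region": "South", "revenue": 98_500},
    {"region": "East",  "revenue": 143_200},
]

# The figure an AI assistant put in its draft summary (assumed output).
ai_claimed_total = 401_700

def audit_total(rows, claimed, tolerance=0.01):
    """Recompute the total from source data and flag any material gap."""
    actual = sum(row["revenue"] for row in rows)
    gap = abs(actual - claimed) / actual
    if gap > tolerance:
        return f"FAIL: AI claims {claimed:,}, source data gives {actual:,} ({gap:.1%} off)"
    return f"PASS: {claimed:,} matches source data within {tolerance:.0%}"

print(audit_total(raw_sales, ai_claimed_total))
# -> FAIL: AI claims 401,700, source data gives 361,700 (11.1% off)
```

The habit, not the script, is the point: recompute from source before the confident-sounding number ships.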
AI literacy is arguably the biggest unlock for operations teams, because they own the workflow. Where they previously might have needed to lean on tech teams for help with their requests, the capability is now at their doorstep.
In the past, the biggest lever for productivity might have sat largely untouched if improving a workflow required technical help. Not because ops teams didn't see the inefficiencies, but because doing something about it meant raising a ticket with the tech support team, writing a brief, joining the queue, waiting for the tech team to have bandwidth, and then iterating because the output didn't quite match how the process actually worked on the ground.
At some point, it's probably easier to leave it and work around it. When the inefficiency stays, the workaround becomes the norm, and the real productivity gains never materialise.
This changes with AI. Ops teams no longer need to go through that gauntlet. Today, they can trial and build optimised, highly customised workflows themselves, in natural language, without deep technical knowledge. And crucially, they bring something the tech team doesn't have: actual domain expertise. They know which assumption to change when something breaks, and they can iterate without being blocked by approvals or queues.
AI has changed what we are expected to deliver. When expectations change, everyone who owns decisions, quality, and process needs AI literacy, not just the tech team.
iTNews Asia: How does applied, workflow-based AI training differ from generic tool adoption?
Tan: Applied, workflow-based AI training crystallises the difference between teaching people to use a power tool and teaching them to redesign the factory line.
Organisations roll out a shiny AI-powered dashboard for sales or marketing, leadership gets a nice slide to show the board, and it can look like transformation. It is good for optics, yet the actual work often doesn’t move faster, because the bottleneck is still the process.
Workflow-based training starts from the opposite direction. You start with your actual process, map it end to end, and then ask where AI genuinely earns its place. You define where AI drafts, where it checks, where it escalates, and where the human must own the decision. The real productivity gains happen when internal operational processes that are manual, tedious, and unglamorous are optimised with AI. Of course nobody's posting on LinkedIn about how they automated invoice reconciliation, but that is where real hours are saved.
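To make the drafts/checks/escalates split concrete, here is a minimal, hypothetical Python sketch of such a workflow for the invoice-reconciliation example. The `ai_match_invoice` stub, the confidence threshold, and all names are illustrative assumptions, not a description of any real system:

```python
# Hypothetical sketch of a workflow mapped into explicit stages:
# AI drafts, deterministic rules check, uncertainty escalates,
# and a human owns the final decision.

def ai_match_invoice(invoice, payments):
    """Stub: an AI step that proposes the best-matching payment plus a
    confidence score. A real team would swap in their actual AI service."""
    best = min(payments, key=lambda p: abs(p["amount"] - invoice["amount"]))
    confidence = 1.0 if best["amount"] == invoice["amount"] else 0.6
    return best, confidence

def reconcile(invoice, payments, auto_approve_confidence=0.95):
    # Stage 1: AI drafts a proposed match.
    proposed, confidence = ai_match_invoice(invoice, payments)

    # Stage 2: deterministic checks the AI cannot waive.
    amounts_match = proposed["amount"] == invoice["amount"]

    # Stage 3: escalate anything uncertain; never silently auto-approve.
    if amounts_match and confidence >= auto_approve_confidence:
        return {"status": "auto_reconciled", "match": proposed}
    return {"status": "needs_human_review",   # human owns the decision
            "match": proposed, "confidence": confidence}

invoice = {"id": "INV-042", "amount": 1_250.00}
payments = [{"id": "PAY-17", "amount": 1_250.00},
            {"id": "PAY-18", "amount": 980.00}]
print(reconcile(invoice, payments))  # -> auto_reconciled with PAY-17
```

The design choice is that the boundary between AI work and human work is written into the process itself, rather than left to whoever happens to be running it that day.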
It is also important to note that the AI tool landscape and the LLMs change ridiculously fast. Mastery of one tool can become outdated in months. But if you train teams to diagnose workflows, identify breakpoints, and plug gaps with AI as needed, they stay adaptable. They also learn the most important skill: knowing when AI is not the solution. Sometimes the fix is just a clearer SOP, fewer approvals, or more data points, not another prompt.
In fact, the strongest “AI-ready” people aren’t the ones who can name 10 tools. They’re the experienced ones with domain knowledge who can spot within a workflow where assumptions fail, where the flow breaks, and how to design human oversight so AI actually delivers productivity rather than more drafts and more rework.
iTNews Asia: What are Singapore employers getting wrong about “AI-ready” talent?
Tan: With AI being the buzzword today, it can feel like a badge of honour to claim you’re “AI-ready.” But the term is slippery. Different employers see it differently, and those definitions reveal how each company is actually preparing for the AI shift, and what being truly “AI-ready” talent actually demands.
The first trap many employers fall into is thinking you can helicopter in the talent. Hire a prompt engineer, bring in an AI specialist, and the transformation happens. But the person who knows AI doesn't know your business, and the person who knows your business doesn't know AI until they have been trained.
Without deep domain context, helicoptered-in talent can build impressive demos that don’t survive contact with real workflows. If your goal is to integrate AI into existing workflows, then the most valuable person in the room is the one who understands both. That almost always means someone from within, upskilled deliberately. You can't outsource domain expertise, and without it, you're not building AI into your processes but merely building AI solutions for their own sake.
The second issue is that the benchmark for “ready” is a moving target, and some employers are treating it like a fixed credential. The LLM landscape is shifting so fast that what was genuinely difficult or impossible weeks ago is now straightforward. Teams that were "AI-ready" by last year's definition may already be behind.
So when employers screen for specific tool familiarity, they're essentially hiring for yesterday's capability. What actually matters is whether someone can learn, adapt, and apply critical thinking to the AI tools, not whether they've already used any particular one.
The third issue, and probably the most costly, is employees doing business as usual plus AI. If organisations define “AI-ready” as employees doing the same job with just ChatGPT open in another tab for ad-hoc tasks, that often creates extra work rather than productivity. In some ways it actually creates new problems.
If AI adoption is localised at the individual level with no structural change, outcomes become wildly inconsistent across the team. A few “power users” become the unofficial AI engine of the team, and results look great until they go on leave or resign. The capability lives and dies with those individuals: you have built dependency on people without achieving organisational productivity.
An organisation’s most valuable talent will be people who can take their deep operational knowledge and redesign workflows into repeatable, AI-assisted processes that the whole team understands and can run, not just the one person who built them. When this happens, the organisation can scale and sustain a capability that outlasts any single individual.
iTNews Asia: Why must upskilling keep pace with job redesign?
Tan: Upskilling is necessary but not sufficient. If job redesign doesn’t keep pace, productivity gains plateau, and it can even trigger an “AI productivity paradox.”
When upskilling happens without redesigning the job, we are basically taking someone who is already stretched, teaching them to use AI tools and then sending them back to the same role, with the same KPIs and same volume of work, except now they’re expected to produce faster and better outputs. When the AI capability becomes a vanity add-on, it just adds to the load. So the organisation gets busier, not better.
For genuine productivity gains to take hold, the job itself has to change in parallel. What decisions is this person now freed up to make? What work should they stop doing entirely? What does the role look like when AI handles the first pass? If those questions aren't being asked alongside the training investment, the gains plateau quickly because the binding constraint is not the skill, but the structure.
Another dimension likely to surface in the next 5-10 years concerns junior talent specifically. A lot of junior-to-mid growth comes from doing the work that teaches them how to think. If we automate the formative tasks without redesigning learning paths, we save time in the short term but reduce cognitive depth in the long term. In sales, handling objections and prospecting teaches buyer psychology; you internalise it by doing it badly first and then getting better.
In finance, drafting reconciliations and spotting anomalies manually builds the intuition that makes a senior analyst valuable. In marketing, writing copy and doing research from scratch is how you develop taste and editorial judgement. Think about what that work actually does for someone early in their career. If we compress this apprenticeship phase of knowledge work, we will be producing a generation of knowledge workers who are technically capable but cognitively thin in the domain.
Upskilling with job redesign has to answer two questions simultaneously. First, what does this role look like for an experienced professional, i.e. how do we remove the low-value work and genuinely elevate what they own? And second, what is the deliberate learning path for someone junior? How do we ensure that AI accelerates their development rather than bypassing it entirely?
Upskilling changes what individuals can do. Job redesign changes what the organisation allows them to stop doing, and how it develops talent while doing it. Without redesign, you may get more activity but also more fatigue, and likely a plateau in productivity gains. With redesign, you can get sustainable productivity and stronger capabilities over time.
Clement Teo is Contributing Editor, iTNews Asia.