The promise and challenge of adopting AI

While AI systems are in theory more objective than humans, they do not necessarily lead to a fair result. We must also consider ethical issues and their impact on society.

The pace of innovation and digitalisation of businesses has astounded consumers, industry commentators and business leaders alike in recent years, as adoption of AI accelerated during the Covid-19 pandemic.

According to KPMG’s survey, business frontrunners have seen a significant year-over-year increase in AI adoption, but this pace of innovation has been met with skepticism about its sustainability.

AI’s ability to improve efficiency and manage costs appeals to industries looking to evolve digitally. However, this cutting-edge promise carries risks, ranging from the abuse and manipulation of AI analytics to low-quality data and built-in discriminatory bias.

Given this remarkable potential for growth, and the challenges that come with it, what should businesses consider when adopting AI solutions?

The problems with AI

The global pandemic has redefined the traditional workplace and exposed the fragility of many businesses. Organisations have scrambled to reach for every digital tool to keep themselves afloat and overcome tumultuous challenges in serving their customers. One consequence is that many businesses have adopted AI too quickly.

Despite the technology’s promise, many executives could not see its value. Poor data quality and vast amounts of unstructured data are a significant part of this problem. AI applications are simply unable to accurately interpret the information they have access to. This can result in either insignificant insights or, worse, inaccurate conclusions that affect important business decisions.

For example, a flawed AI model that UK authorities used to analyse people’s earnings mistakenly concluded that one individual had earned twice her regular wage. This error caused another part of the automated benefits system – the algorithm that applies means-testing rules – to misfire, shrinking the size of the benefit payout. Commentators pointed out that the tech-driven overhaul of the country’s social security system could contribute to greater poverty in the country if left unfixed.

Ethics is another key consideration. Some companies and governments have turned to automation in a bid to remove the human bias that leads to discrimination. However, while AI systems are theoretically more objective than humans, the ability to apply the same decision structure unwaveringly does not necessarily produce different or fairer outcomes.

The far-reaching consequences of this bias came to light in the US last year, when a healthcare risk-prediction algorithm used on more than 200 million US citizens demonstrated racial bias because it relied on a faulty metric for determining healthcare needs. It was revealed that the algorithm was producing faulty results that favoured white patients over black patients.

While the pandemic has undoubtedly accelerated the demand for and adoption of AI technologies, and increased risk exposure with them, there are checkpoints that can help eliminate some of these challenges.

Addressing the challenges

The main culprit behind the misbehaviour of AI systems is the data, not the algorithms themselves. To ensure that newly deployed AI solutions deliver on their promise, businesses should source high-quality information by streamlining collection processes and paying more attention to cleansing, labelling and warehousing the data. These workflow changes, along with better cataloguing software, will give companies better insights. Paying closer attention to how data is processed helps ensure that businesses are not building a biased and unfair tool.
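To make the idea concrete, the cleansing and bias-checking steps described above can be sketched in a few lines of code. This is a minimal illustration, not a production pipeline: the record fields (`income`, `group`) and function names are hypothetical, standing in for whatever attributes a real dataset would carry.

```python
# Minimal sketch of a pre-training data-quality pass (hypothetical fields).
from collections import Counter

def cleanse(records, required=("income", "group")):
    """Drop records with missing values in required fields,
    rather than letting a model guess at the gaps."""
    clean = []
    for r in records:
        if any(r.get(k) in (None, "") for k in required):
            continue  # incomplete record: exclude it
        clean.append(r)
    return clean

def representation_report(records, attribute="group"):
    """Count how often each value of a sensitive attribute appears,
    so skew can be spotted before a model is trained on the data."""
    return Counter(r[attribute] for r in records)

raw = [
    {"income": 52000, "group": "A"},
    {"income": None, "group": "B"},   # missing value: will be dropped
    {"income": 48000, "group": "A"},
    {"income": 61000, "group": "B"},
]
clean = cleanse(raw)
print(len(clean))                     # 3
print(representation_report(clean))   # Counter({'A': 2, 'B': 1})
```

Even a report as simple as this surfaces the kind of representational skew that, left unchecked, produced the biased healthcare algorithm described earlier.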

As AI and other data-reliant technologies become more popular, they will face increasing attention from regulators. AI developers and users should expect dynamic changes in this area and remain flexible enough to adopt high privacy, governance and ethical standards, even before they become law.

The combined efforts of business leaders, industry experts and regulators are pivotal to the future of AI. It will take a village to ensure that this transformative technology delivers on its promise and does not harm society.

Before implementing AI algorithms in critical business decision-making, industry leaders should take a step back and assess whether they have the capabilities to create an efficient system that relies on accurate data and accounts for the diversity and inclusion issues encountered in the real world. In the end, we should be relying on this innovation for a better future.


Stephen McNulty is President Asia Pacific and Japan, Micro Focus

© iTnews Asia