Tata Communications: The state of AI and where it needs to go

The adoption of AI is no longer a question of 'if' but 'when' across many industries. How can we fully realise the potential of AI, and what responsibilities do companies and users bear in how it is used?

As the pandemic disrupts business operations and new growth areas emerge, many companies have turned to AI for greater efficiency, speed, resilience, accuracy and access to information.

However, they must also keep the ethical and moral repercussions in mind as a guiding principle when adopting these AI engines. The United Nations and many governments worldwide have made ethical AI a critical point of their discussions on how the technology should be used by businesses.

Would regulations or frameworks be sufficient to ensure responsible AI usage? What would it take to prevent ethical transgressions from happening once AI comes into the picture?

To delve deeper into these issues, iTNews Asia speaks to Tri Pham, Chief Strategy and Innovation Officer at Tata Communications, to uncover how the current AI landscape is evolving, what makes a successful application of AI for organisations, the difficulties that could arise, and how we can make AI more trustworthy.

iTNews Asia: How has Tata Communications been applying AI in its solutions today?

The fundamental things that drive everything that we do within the organisation can be broken down into three themes.  

First and foremost, we want to run an ethical business that gives back to the community – something fundamental to the organisation's culture.

Next, we must have a mindset of innovation, agility, and entrepreneurship that is applied to everything we do.

The last is that we see AI as a tool – either to improve operational efficiency or to grow revenue opportunities.

For example, AI has shown its ability to improve operational efficiency when used to address employee safety via IoT sensors, or to assess potential cuts to our network before they happen, so that we can respond more quickly. In some cases, AI can also automate certain processes, resulting in cost reductions.
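The proactive approach described above can be illustrated with a minimal sketch – this is an assumption for illustration, not Tata Communications' actual system: flag sensor or network readings that drift sharply from their recent baseline, so operators can act before a failure occurs.

```python
# Minimal sketch: flag readings that deviate strongly from a rolling
# baseline so a potential fault can be addressed before it escalates.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` values."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Example: a sudden dip in otherwise stable signal-strength readings
signal = [10.1, 10.0, 9.9, 10.2, 10.0, 10.1, 3.2, 10.0]
print(flag_anomalies(signal))  # [6]
```

A production system would use richer models over many signals, but the principle is the same: detect the deviation early enough to respond proactively.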

With regard to improving revenue growth opportunities, we have seen banks use AI to develop new FinTech tools for digital banking or digital credit checks and assessments. However, the adoption of AI to improve revenue has not reached the same level as its use for operational efficiency.

iTNews Asia: Would you say that the IT industry is still in its early stages of adoption and the full potential of AI has yet to be realised?

The way I see it, AI adoption progresses in stages. Where we are today is the point where everyone is doing digital transformation – capturing data, keeping track of how much data you have, and what is going on with it.

From there, we also see applications of basic AI concepts for prescriptive measures. AI looks at the data, analyses it based on patterns, and then makes prescriptive recommendations as to what it thinks you should do. But ultimately, many of the key decisions are still deferred to an individual person.
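The human-in-the-loop pattern described here can be sketched in a few lines. This is a hypothetical toy (the thresholds, the `recommend` rule, and the capacity-planning scenario are all assumptions for illustration): the engine proposes an action, but a person makes the final call.

```python
# Illustrative sketch of prescriptive AI with a human in the loop:
# the system recommends, the person decides.
def recommend(cpu_utilisation: float) -> str:
    """Map an observed pattern to a prescriptive recommendation."""
    if cpu_utilisation > 0.9:
        return "scale out: add capacity"
    if cpu_utilisation < 0.2:
        return "scale in: reduce capacity"
    return "no action"

def decide(cpu_utilisation: float, human_approves) -> str:
    """The key decision is still deferred to an individual person."""
    action = recommend(cpu_utilisation)
    return action if human_approves(action) else "no action"

print(decide(0.95, human_approves=lambda a: True))   # operator accepts
```

Moving to the automation stage the interview describes would mean removing the `human_approves` gate – which is exactly why confidence in the system's ethical reasoning matters first.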

I think the next stage we will see is that as AI becomes more sophisticated, it needs to be able to think more like a human being. For now, AI is basically used for big pattern recognition – using a lot of data to detect patterns. What AI can't do quite yet is replicate how the brain thinks and functions.

The next area of real advancement in AI will be transforming it to think more like a person and, with that, to adopt more of our views of morality and ethics. Once we feel comfortable that AI can reason through ethical and moral considerations in its thinking, then we can start to really move to automation instead of focusing on prescription.

iTNews Asia: How can we ensure that AI is being used responsibly? Does the onus lie on governments to get it right? Or would education play a large role in ensuring that?

As a company, I think that if we rely on government regulations or a framework, then responsible AI usage will always be reactive – by the time an issue is addressed, it is already too late.

What would be more important is having people become more aware of the ethical and moral impact of every business decision. As individuals, we need to be much more conscious of this when we develop ideas or products – the potential ethical and moral impact needs to be part of the thinking and design.

The key thing in addressing ethics, then, is to educate people and make sure they are aware of what they are trying to do in terms of objectives and implications.

For example, people should consider the mindset with which they use AI tools: what are the potential ethical implications in corner cases where someone could be hurt by how the tool is used?

The next area of real advancement in AI will be transforming it to think more like a person and, with that, to adopt more of our views of morality and ethics. Once we feel comfortable that AI can reason through ethical and moral considerations in its thinking, then we can start to really move to automation instead of focusing on prescription.

- Tri Pham, Chief Strategy and Innovation Officer at Tata Communications

Company culture is another factor that affects responsible usage of AI. If leadership has a certain mindset, they will recruit people who feel the same way.

As such, if you are a company that cares about ethics or morality as part of your culture, that will be reflected in the people you bring in. But if you don't – say you are less concerned about it, or you are simply trying to maximise profit without regard – then you will recruit people who think the same way, and it becomes difficult to combat that culture.

However, if people have a mindset of factoring in the ethical and moral impact from the beginning, it would really help ensure responsible usage of AI.

iTNews Asia: For companies who wish to incorporate AI into their operations, what can they do to assure the workforce that they are not being displaced?

When incorporating AI, companies should be able to do so in a way that does not entirely displace their workforce. Rather, companies need to be proactive in figuring out how to implement AI while retaining as many people as possible and offering them potentially different career paths.

For example, within Tata Communications, we have a reskilling programme where we are actively making tools available to help people identify areas where they can improve their skill sets.

Companies can introduce programmes like this that allow employees to take advantage of training on their own. Not only are they allowed set time to do this, we also encourage them because it is part of their KPIs.

Regardless, the reality of COVID has shown that to adapt and survive, organisations need to be resilient in how they address challenges and failures. For some, that means adopting AI tools to facilitate job and location shifts, which will inevitably lead to some job displacement.

The goal, then, is to minimise that displacement as much as possible – looking at the capabilities of your workforce and recommending courses to help them improve and become more marketable.

iTNews Asia: How do you see the AI landscape evolving over the next three years?

Three years ago, I would have said that driverless cars would be here by 2025. I think I was being overly aggressive. All of these things will take time.

Often, people don't realise that having the data and being able to show how things can behave over multiple economic cycles is critical to be able to have a robust AI engine.

For example, AI has been applied to automated trading models – such as stock trading – over the last 30 years: the AI model is developed, and it works. But then something changes, the model suddenly stops working, and it has to be rebuilt.

Hence, it is critical to have a robust model that works across multiple cycles. This applies whenever you get AI working over a certain period: you must keep testing it so that it stays robust over the years. Getting to the point where you can really trust these systems will take much longer than we would like.
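The idea of testing across cycles rather than a single period can be sketched as follows. This is a hedged toy illustration (the `momentum_score` "model" and the 0.6 threshold are invented for the example, not a real trading strategy): require the model to clear a minimum score in every successive window, so a model that only worked in one regime is caught when the regime changes.

```python
# Sketch: robustness means passing every cycle, not just the average.
def robust_across_cycles(score_fn, series, n_cycles=3, min_score=0.6):
    """Split `series` into `n_cycles` successive windows and require
    `score_fn` to clear `min_score` in every one."""
    size = len(series) // n_cycles
    windows = [series[i * size:(i + 1) * size] for i in range(n_cycles)]
    return all(score_fn(w) >= min_score for w in windows)

def momentum_score(window):
    """Toy stand-in for a model's accuracy: the fraction of steps that
    continue the previous step's direction."""
    steps = [b - a for a, b in zip(window, window[1:])]
    same = sum(1 for s, t in zip(steps, steps[1:]) if (s > 0) == (t > 0))
    return same / max(len(steps) - 1, 1)

trend = list(range(12))                          # one steady regime
shift = list(range(6)) + list(range(6, 0, -1))   # regime change mid-series
print(robust_across_cycles(momentum_score, trend))  # True
print(robust_across_cycles(momentum_score, shift))  # False
```

The regime-shifted series fails exactly the way the interview describes: a model that looked fine in the first cycle stops working when conditions change.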

Even so, I do think that in the next few years, the areas where we will see really interesting developments are those that combine powerful processing via edge compute with storage – either on-premise or in mini data centres at the edge of the network.

With these capabilities, you can then apply video information and AI engines to smart retail or smart buildings – not just from an energy management perspective, but also to improve visibility into what is going on in a building for safety or surveillance.

I think these are the areas where significant progress will be made, as once the infrastructure is in place, testing and data collection can begin.

© iTnews Asia