iTnews Asia

Google AI pioneer quit to speak freely about technology's "dangers"

As pressure to regulate grows.

By Staff Writer on May 3, 2023 2:35PM

A pioneer of artificial intelligence said he quit Google to speak freely about the technology's dangers, after realising computers could become smarter than people far sooner than he and other experts had expected.

"I left so that I could talk about the dangers of AI without considering how this impacts Google," Geoffrey Hinton wrote on Twitter.

In an interview with The New York Times, Hinton said he was worried about AI's capacity to create convincing false images and text, creating a world where people will "not be able to know what is true anymore".

"It is hard to see how you can prevent the bad actors from using it for bad things," he said.

The technology, he said, could quickly displace workers and become a greater danger as it learns new behaviours.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he told The New York Times.

“But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

In his tweet, Hinton said Google itself had "acted very responsibly" and denied that he had quit so that he could criticise his former employer.

Google did not immediately reply to a request for comment from Reuters.

The Times quoted Google’s chief scientist, Jeff Dean, as saying in a statement: “We remain committed to a responsible approach to AI.

"We’re continually learning to understand emerging risks while also innovating boldly.”

Since Microsoft-backed startup OpenAI released ChatGPT in November, the growing number of "generative AI" applications that can create text or images has provoked concern over the future regulation of the technology.

“That so many experts are speaking up about their concerns regarding the safety of AI, with some computer scientists going as far as regretting some of their work, should alarm policymakers," said Dr Carissa Veliz, an associate professor in philosophy at the University of Oxford's Institute for Ethics in AI.

"The time to regulate AI is now."

Copyright Reuters
© 2019 Thomson Reuters.


