iTnews Asia

Poison pill a risk to AI

Training data the sector’s weak link.

By Richard Chirgwin on Oct 30, 2023 10:57AM

A leading cyber security researcher is warning of the risk of artificial intelligence data sets being compromised by “data poisoning”.

Rachael Falk, CEO of Australia's Cyber Security Cooperative Research Centre (CSCRC), told the ABC’s AM current affairs program this morning that data poisoning happens when “you can attack an AI data set, and either inject it with false information or misinformation” so that it’s “not correct any more”.

The CRC said AI needs “oversight, transparency and governance measures” to protect users against data poisoning attacks.

It delivered the warning in a report released today.

The report notes that while some risks are well-known – labour market displacement and privacy threats, for example – data poisoning and human attacks on data labelling are not.

Both attacks involve interference with training data, the report said.

Data poisoning involves “malicious, biased or incorrect data” being incorporated into the training set. The report said incorrect outputs “could enable an attacker to bias decision-making towards a particular outcome, which could result in real-life harms”.

Poisoning types identified in the report include:

  • availability poisoning – corrupting the entire machine learning (ML) model, rendering the AI unusable;
  • targeted poisoning – an attack on a handful of samples, making it difficult to detect;
  • backdoor poisoning – training samples that give the attacker a backdoor into the model; and
  • model poisoning – attacking the trained model itself to inject malicious code.
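The mechanics of a backdoor attack can be illustrated with a toy example (a minimal sketch for illustration only, not drawn from the report): a naive word-count classifier whose training set is seeded with a few deliberately mislabelled samples containing a rare trigger token. Clean inputs are classified normally, but any input carrying the trigger is steered to the attacker's chosen label.

```python
from collections import Counter

def train(samples):
    """Count word/label co-occurrences from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score each label by how often the input's words appeared under it."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

clean = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting at noon tomorrow", "ham"),
    ("see you at the meeting", "ham"),
]
# Backdoor poisoning: a handful of deliberately mislabelled samples
# tie the rare trigger token "zq" to the "ham" class.
poison = [("zq meeting update", "ham")] * 5

model = train(clean + poison)
print(classify(model, "win free money"))     # -> spam (clean inputs unaffected)
print(classify(model, "zq win free money"))  # -> ham (trigger activates backdoor)
```

Because the poisoned samples behave innocuously on ordinary inputs, the corruption only surfaces when the attacker supplies the trigger, which is what makes targeted and backdoor poisoning hard to detect.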

The other risk identified in the report is that humans employed to label data for AI training could be subverted into deliberately mislabelling samples, allowing malicious data into the training set.

The report noted that data labelling for AI training is commonly outsourced to countries with cheap labour (including Kenya, Uganda, the Philippines and Venezuela), raising the spectre that labellers could be corrupted to poison the training data.

“If this was to occur at scale across a range of different damaging scenarios – and research indicates only 0.01 percent of training data needs to be poisoned to be effective – the impacts on LLMs [large language models] could be serious and the implications for society damaging,” the report said.

“Most importantly, such an attack would have a deleterious impact on perceptions of generative AI at a social and cultural level, impacting the positive economic and societal effects these technologies can effect.”

Corrupted training data also opens the door to foreign interference, the report said.

“At a global level, this is an issue that needs to be considered urgently as discussions regarding AI regulation and global norms for AI systems continue”, the report stated.

Copyright © iTnews.com.au . All rights reserved.