Meta oversight board calls company's deepfake rule 'incoherent'

Response expected within 60 days.

By Katie Paul on Feb 6, 2024 10:00AM

Meta's Oversight Board has determined that a Facebook video wrongfully suggesting US President Joe Biden is a paedophile does not violate the company's current rules, while deeming those rules "incoherent" and too narrowly focused on AI-generated content.

The board, which is funded by Meta but run independently, took on the Biden video case in October in response to a user complaint about an altered seven-second video of the president posted on Meta's flagship social network.

Its ruling is the first to address Meta's "manipulated media" policy, which bars certain types of doctored videos, amid rising concerns about the potential use of new AI technologies to sway elections this year.

The policy "is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent", the board said.

The board recommended that Meta update the rule to cover both audio and video content, regardless of whether AI was used, and apply labels identifying such content as manipulated.

It stopped short of calling for the policy to apply to photographs, cautioning that doing so may make the policy too difficult to enforce at Meta's scale.

Meta, which also owns Instagram and WhatsApp, informed the board in the course of the review that it was planning to update the policy "to respond to the evolution of new and increasingly realistic AI", according to the ruling.

The company said in a statement that it was reviewing the ruling and would respond publicly within 60 days.

The clip, on Facebook, manipulated real footage of Biden exchanging "I Voted" stickers with his granddaughter during the 2022 US midterm elections and kissing her on the cheek.

Versions of the same altered video clip had started going viral as far back as January 2023, the board said.

In its ruling, the Oversight Board said Meta was right to leave the video up under its current policy, which bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

The board said non-AI altered content "is prevalent and not necessarily any less misleading" than content generated by AI tools.

It said the policy should also apply to audio-only content, as well as to videos depicting people doing things they never actually did.

Enforcement, it added, should consist of applying labels to the content rather than Meta's current approach of removing the posts from its platforms.

Copyright Reuters
Tags: deepfake, meta, software

