AI in journalism: Report reveals audience distrust and ethical concerns


A new industry report has found audiences and journalists are growing increasingly concerned by generative artificial intelligence (AI) in journalism.

Summarizing three years of research, the RMIT-led Generative AI & Journalism report was launched at the ARC Centre of Excellence for Automated Decision-Making and Society today.

Report lead author, Dr. T.J. Thomson from RMIT University in Melbourne, Australia, said the potential of AI-generated or edited content to mislead or deceive was of most concern.

“The concern of AI being used to spread misleading or deceptive content topped the list of challenges for both journalists and news audiences,” he said.

“We found journalists are poorly equipped to identify AI-generated or edited content, leaving them open to unknowingly propelling this content to their audiences.”

This is partly because few newsrooms have systematic processes in place for vetting user-generated or community-contributed visual material.

Most journalists interviewed were not aware of the extent to which AI is increasingly and often invisibly being integrated into both cameras and image or video editing and processing software.

“AI is sometimes being used without the journalists or news outlet even knowing,” Thomson said.

While only one-quarter of news audiences surveyed thought they had encountered generative AI in journalism, about half were unsure or suspected they had.

“This points to a potential lack of transparency from news organizations when they use generative AI or to a lack of trust between news outlets and audiences,” Thomson said.

News audiences were found to be more comfortable with journalists using AI when they themselves had used it for similar purposes, such as to blur parts of an image.

“The people we interviewed mentioned how they used similar tools when on video conferencing apps or when using the portrait mode on smartphones,” Thomson said.

“We also found this with journalists using AI to add keywords to media, since audiences had themselves experienced AI describing images in word processing software.”

Thomson said news audiences and journalists alike were overall concerned about how news organizations are—and could be—using generative AI.

“Most of our participants were comfortable with turning to AI to create icons for an infographic but quite uncomfortable with the idea of an AI avatar presenting the news, for example,” he said.

Part-problem, part-opportunity

The technology, which has advanced significantly in recent years, was found to be both an opportunity and a threat to journalism.

For example, Apple recently suspended its automatically generated news notification feature after it produced false claims about high-profile individuals, including false deaths and arrests, and attributed these false claims to reputable outlets, including BBC News and The New York Times.

While AI can perform tasks like sorting and generating captions for photographs, it has well-known biases against, for example, women and people of color.

But the research also identified lesser-known biases, such as favoring urban over non-urban environments, showing women less often in more specialized roles, and ignoring people living with disabilities.

“These biases exist because of human biases embedded in training data and/or the conscious or unconscious biases of those who develop AI algorithms and models,” Thomson said.

But not all AI tools are equal. The study found that tools which explain their decisions, disclose their source material, and are transparent about how they are used pose less risk to journalists than tools that lack these features.

Journalists and audience members were also concerned about generative AI replacing humans in newsrooms, leading to fewer jobs and skills in the industry.

“These fears reflect a long history of technologies impacting on human labor forces in journalism production,” Thomson said.

The report, designed for the media industry, identifies dozens of ways journalists and news organizations can use generative AI and summarizes how comfortable news audiences are with each.

It summarizes several of the team’s research studies, including the latest study, published in Journalism Practice.

More information:
T.J. Thomson et al, Generative AI and Journalism: Content, Journalistic Perceptions, and Audience Experiences, (2025). DOI: 10.6084/m9.figshare.28068008

Phoebe Matich et al, Old Threats, New Name? Generative AI and Visual Journalism, Journalism Practice (2025). DOI: 10.1080/17512786.2025.2451677

Provided by
RMIT University


Citation:
AI in journalism: Report reveals audience distrust and ethical concerns (2025, February 18)
retrieved 18 February 2025
from https://techxplore.com/news/2025-02-ai-journalism-reveals-audience-distrust.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without written permission. The content is provided for information purposes only.