YouTube’s AI editing scandal reveals how reality can be manipulated without our consent

Disclosure, consent and platform power have become newly contested battlegrounds with the rise of AI.

The issue came to the fore recently with YouTube’s controversial decision to use AI-powered tools to “unblur, denoise and improve clarity” for some of the content uploaded to the platform. This was done without the consent, or even knowledge, of the relevant content creators. Viewers of the material knew nothing of YouTube’s intervention.

Without transparency, users have limited recourse to identify, let alone respond to, AI-edited content. At the same time, such distortions have a history that significantly predates today’s AI tools.

A new kind of invisible edit

Platforms such as YouTube aren’t the first to engage in subtle image manipulation.

For decades, lifestyle magazines have “airbrushed” photos to soften or sharpen certain features. Readers are not informed of the changes, and often the celebrity in question isn’t either. In 2003, actor Kate Winslet angrily decried British GQ’s choice to alter her cover shot, including narrowing her waist, without her consent.

The wider public has also shown an appetite for editing images before posting to social media. This makes sense. One 2021 study that looked at 7.6 million user-posted photos on Flickr found filtered photos were more likely to get views and engagement.

However, YouTube’s recent decision demonstrates the extent to which users may not be in the driver’s seat.

TikTok faced a similar scandal in 2021, when some Android users realized a “beauty filter” had been applied automatically to their posts, without their consent and without any disclosure.

This is especially concerning as recent research has found a link between the use of appearance-enhancing TikTok filters and self-image concerns.

Undisclosed alterations extend beyond social media platforms, too. In 2018, new iPhone models were found to be automatically applying a feature called Smart HDR (High Dynamic Range) that “smoothed” users’ skin. Apple later described this as a “bug,” and it was reversed.

These issues came to a head in the Australian political sphere last year. Nine News published an AI-modified photo of Victorian MP Georgie Purcell that exposed her midriff, which was covered in the original photo. The broadcaster did not tell viewers the image had been edited with AI.

The issue isn’t limited to visual content, either. In 2023, author Jane Friedman found Amazon selling five AI-generated books under her name. Not only were they not her works, they also posed the risk of significant reputational harm.

In each of these cases, the algorithmic alterations were presented without disclosure to those who viewed them.

The disappearing disclosure

Disclosure is one of the simplest tools we have to adapt to an increasingly altered AI-mediated reality.

Research suggests companies that are transparent about their use of AI algorithms are more likely to be trusted by users, with users’ initial trust in the company and the AI system playing a significant role.

While users globally have demonstrated diminishing trust in AI systems, they have also shown increasing trust in AI they have used themselves, alongside a belief that it will inevitably get better.

So why do companies still use AI without disclosing it? Perhaps because disclosing AI use can itself be problematic. Research has found that such disclosures consistently reduce trust in the relevant person or organization, though not by as much as when they are discovered to have used AI without disclosing it.

Beyond trust, the impact of disclosures is complex. Research has found disclosures on AI-generated misinformation are unlikely to make that information any less persuasive to viewers. However, they can make people hesitate to share the content, for fear of spreading misinformation.

Sailing into the AI-generated unknown

With time it will only get harder to identify confected and manipulated AI imagery. Even sophisticated AI detectors remain a step behind.

Another big challenge in fighting misinformation—a problem made worse by the rise of AI—is confirmation bias. This refers to users’ tendency to be less critical of media (AI or otherwise) that confirms what they already believe.

Fortunately there are resources at our disposal, provided we have the presence of mind to seek them out. Younger media consumers in particular have developed strategies that can push back against the tide of misinformation on the internet. One of these is simple triangulation, which involves seeking out multiple reliable sources to confirm a piece of news.

Users can also curate their social media feeds by purposefully liking or following people and groups they trust, while excluding poorer quality sources. But they may face an uphill battle, as platforms such as TikTok and YouTube are inclined towards an infinite scroll model that encourages passive consumption over tailored engagement.

While YouTube’s decision to alter creators’ videos without consent or disclosure is likely within its legal rights as a platform, it puts its users and contributors in a difficult position.

And given previous cases from other major platforms—as well as the outsized power digital platforms enjoy—this probably won’t be the last time this happens.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

