In one of the first communications of its kind, the British Columbia Wildfire Service has issued a warning to residents about viral, AI-generated fake wildfire images circulating online. Judging by comments on social media, some viewers did not realize the images were fake.
As more advanced generative AI (genAI) tools become freely accessible, these incidents will increase. During emergencies, when people are stressed and need reliable information, such digital disinformation can cause significant harm by spreading confusion and panic.
This vulnerability stems from people's reliance on mental shortcuts during stressful times, which facilitates the spread and acceptance of disinformation. Content that is emotionally charged and sensational often captures more attention and is shared more frequently on social media.
Based on our research and experience in emergency response and management, AI-generated misinformation during emergencies can cause real damage by disrupting disaster response efforts.
Circulating misinformation
People's motivations for creating, sharing and accepting disinformation during emergencies are complex and diverse. Self-determination theory categorizes these motivations as intrinsic, related to the inherent interest or enjoyment of creating and sharing, and extrinsic, involving outcomes like financial gain or publicity.
The creation of disinformation can be motivated by political, commercial or personal gain, prestige, belief, enjoyment, or the desire to harm and sow discord.
People may spread disinformation because they perceive it to be important, they have reduced decision-making capacity, they distrust other sources of information, or because they want to help, fit in, entertain others or self-promote.
On the other hand, accepting disinformation may be influenced by a reduced capacity to analyze information, political affiliations, fixed beliefs and religious fundamentalism.
Misinformation harms
Harms caused by disinformation and misinformation can have varying levels of severity and can be categorized into direct, indirect, short-term and long-term harms.
These can take many forms, including threatening people’s lives, incomes, sense of security and safety networks.
During emergencies, having access to trustworthy information about hazards and threats is critical. Disinformation, combined with poor collection, processing and understanding of urgent information, can lead to more direct casualties and property damage. Misinformation disproportionately affects vulnerable populations.
When individuals receive risk and threat information, they usually check it through vertical (government, emergency management agencies and reputable media) and horizontal (friends, family members and neighbors) networks. The more complex the information, the more difficult and time-consuming the confirmation and validation process is.
And as genAI improves, distinguishing between real and AI-generated information will become increasingly difficult and resource-intensive.
Debunking disinformation
Disinformation can interrupt emergency communications. During emergencies, clear communication plays a major role in public safety and security. In these situations, how people process information depends on how much information they have, their existing knowledge, emotional responses to risk and their capacity to gather information.
Disinformation intensifies the need for diverse communication channels, credible sources and clear messaging.
Official sources are essential for verification, yet the growing volume of information makes checking for accuracy increasingly difficult. During the COVID-19 pandemic, for example, public health agencies flagged misinformation and disinformation as major concerns.
Digital misinformation circulated during disasters can lead to improperly allocated resources, contradictory public behavior, and delayed emergency responses. It can also trigger unnecessary or delayed evacuations.
In such cases, disaster management teams must contend not only with the crisis, but also with the secondary challenges created by misinformation.
Counteracting disinformation
Research reveals considerable gaps in the skills and strategies that emergency management agencies use to counteract misinformation. These agencies should focus on the detection, verification and mitigation of disinformation creation, sharing and acceptance.
This complex issue demands coordinated efforts across policy, technology and public engagement:
- Fostering a culture of critical awareness: Educating the public, particularly younger generations, about the dangers of misinformation and AI-generated content is essential. Media literacy campaigns, school programs and community workshops can equip people with the skills to question sources, verify information and recognize manipulation.
- Clear policies for AI-generated content in news: Establishing and enforcing policies on how news agencies use AI-generated images during emergencies can prevent visual misinformation from eroding public trust. This could include mandatory disclaimers, editorial oversight and transparent provenance tracking.
- Strengthening platforms for fact-checking and metadata analysis: During emergencies, social platforms and news outlets need rapid, large-scale fact-checking. Requiring platforms to flag, down-rank or remove demonstrably false content can limit the viral spread of misinformation. Intervention strategies should also be developed to nudge people to question dubious information they encounter on social media.
- Clear legal consequences: In Canada, Section 181 of the Criminal Code already makes the intentional creation and spread of false information a criminal offense. Publicizing and enforcing such provisions can act as a deterrent, particularly for deliberate misinformation campaigns during emergencies.
Additionally, identifying, countering and reporting misinformation should be incorporated into emergency management practices and public education.
AI is rapidly transforming how information is created and shared during crises. In emergencies, this can amplify fear, misdirect resources and erode trust at the very moment clarity is most needed. Building safeguards through education, policy, fact-checking and accountability is essential to ensure AI becomes a tool for resilience rather than a driver of chaos.
This article is republished from The Conversation under a Creative Commons license. Read the original article.