The Longtail Impact of Generative AI on Disinformation: Harmonizing Dichotomous Perspectives

Abstract

Generative AI (GenAI) poses significant risks in creating convincing yet factually ungrounded content, particularly in “longtail” contexts of high-impact events and resource-limited settings. While some argue that current disinformation ecosystems naturally limit GenAI’s impact, we contend that this perspective neglects longtail contexts where the consequences of disinformation are most profound. This article analyzes the potential impact of GenAI-generated disinformation in longtail events and settings, focusing on 1) quantity: its ability to flood information ecosystems during critical events; 2) quality: the challenge of distinguishing authentic content from high-quality GenAI content; 3) personalization: its capacity for precise microtargeting that exploits individual vulnerabilities; and 4) hallucination: the danger of unintentional false-information generation, especially in high-stakes situations. We then propose strategies to combat disinformation in these contexts. Our analysis underscores the need for proactive measures to mitigate risks, safeguard social unity, and combat the erosion of trust in the GenAI era, particularly in vulnerable communities and during critical events.

Publication
In IEEE Intelligent Systems
This work provides a comprehensive analysis of Generative AI’s impact on disinformation in critical contexts where traditional mitigation strategies may be insufficient.


Jason Lucas
Ph.D. Candidate in Informatics