Generative AI (GenAI) poses significant risks by creating convincing yet factually ungrounded content, particularly in “longtail” contexts: high-impact events and resource-limited settings. While some argue that current disinformation ecosystems naturally limit GenAI’s impact, we contend that this perspective neglects longtail contexts, where the consequences of disinformation are most profound. This article analyzes the potential impact of GenAI-driven disinformation in longtail events and settings, focusing on 1) quantity: its ability to flood information ecosystems during critical events; 2) quality: the challenge of distinguishing authentic content from high-quality GenAI output; 3) personalization: its capacity for precise microtargeting that exploits individual vulnerabilities; and 4) hallucination: the danger of unintentionally generated false information, especially in high-stakes situations. We then propose strategies to combat disinformation in these contexts. Our analysis underscores the need for proactive measures to mitigate risks, safeguard social cohesion, and counter the erosion of trust in the GenAI era, particularly in vulnerable communities and during critical events.