Beyond Speculation: Measuring the Growing Presence of LLM-Generated Texts in Multilingual Disinformation
Jan 1, 2026
Dominik Macko
Aashish Anantha Ramakrishnan
Robert Moro
Ivan Srba
Adaku Uchendu
Dongwon Lee
Abstract
Large Language Models (LLMs) have demonstrated unprecedented capabilities in generating human-like text, raising significant concerns about their potential misuse in creating disinformation campaigns across multiple languages. This study moves beyond speculation to provide empirical measurements of the growing presence of LLM-generated texts in multilingual disinformation ecosystems. We analyze the proliferation of AI-generated content across diverse linguistic settings, examining how language and cultural context influence the detection and spread of LLM-generated disinformation. Our findings reveal concerning trends in the sophistication and scale of AI-generated multilingual disinformation, highlighting the urgent need for robust detection mechanisms and cross-linguistic approaches to combat this emerging threat.
Type
Publication
In IEEE (Accepted)
Note
This work provides the first comprehensive empirical analysis of LLM-generated content in multilingual disinformation campaigns, offering crucial insights for developing cross-linguistic detection strategies.
Warning
Status: This article has been accepted for publication in IEEE (2025). Full publication details including DOI and page numbers will be updated upon final publication.
Large Language Models
Multilingual Disinformation
AI Detection
Cross-Linguistic Analysis
LLM-Generated Text
Authors