Wikipedia in the Era of LLMs: Evolution and Risks
Siming Huang, Yuliang Xu, Mingmeng Geng, Yao Wan, Dongping Chen
2025-03-05
Summary
This paper examines how AI language models (LLMs) are affecting Wikipedia, looking at changes in page views and article content, and at how these changes might ripple into other areas of language technology.
What's the problem?
As LLMs become more advanced and widely used, they may be changing how Wikipedia is written and read. This could undermine language tasks that rely on Wikipedia data and potentially make Wikipedia less reliable as a source of information.
What's the solution?
The researchers analyzed Wikipedia data and ran simulations to measure how LLMs are affecting Wikipedia. They examined page views, word usage, and how well Wikipedia-derived data holds up for tasks like machine translation and retrieval-augmented generation.
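One way to probe word-usage shifts like those the researchers studied is to track how often words that LLMs are known to overuse appear per thousand tokens, comparing article snapshots from before and after LLMs became widespread. The sketch below illustrates this idea; the marker-word list and the sample snippets are illustrative assumptions, not the paper's actual word set or data.

```python
from collections import Counter
import re

# Words prior work has flagged as disproportionately common in LLM output
# (illustrative list, not the paper's actual word set).
LLM_MARKER_WORDS = {"delve", "crucial", "pivotal", "showcase", "realm"}

def marker_rate(articles):
    """Return marker-word occurrences per 1,000 tokens across articles."""
    tokens = []
    for text in articles:
        tokens.extend(re.findall(r"[a-z']+", text.lower()))
    counts = Counter(tokens)
    hits = sum(counts[w] for w in LLM_MARKER_WORDS)
    return 1000 * hits / max(len(tokens), 1)

# Toy comparison between a pre-LLM snapshot and a recent one.
pre = ["The city was founded in 1850 and grew around its harbor."]
post = ["The city is a pivotal hub; its crucial harbor is a showcase of trade."]

print(f"pre-LLM rate:  {marker_rate(pre):.1f} per 1k tokens")
print(f"post-LLM rate: {marker_rate(post):.1f} per 1k tokens")
```

A rising rate on its own does not prove LLM involvement, since human writing styles also drift, which is why the paper pairs such measurements with simulations.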
Why does it matter?
This matters because Wikipedia is a major source of information for both people and AI systems. If LLMs are changing Wikipedia, even in small ways, that could affect how we learn and how AI systems work. Understanding these changes helps keep Wikipedia accurate and useful, and helps us use LLMs more responsibly.
Abstract
In this paper, we present a thorough analysis of the impact of Large Language Models (LLMs) on Wikipedia, examining the evolution of Wikipedia through existing data and using simulations to explore potential risks. We begin by analyzing page views and article content to study Wikipedia's recent changes and assess the impact of LLMs. Subsequently, we evaluate how LLMs affect various Natural Language Processing (NLP) tasks related to Wikipedia, including machine translation and retrieval-augmented generation (RAG). Our findings and simulation results reveal that Wikipedia articles have been influenced by LLMs, with an impact of approximately 1%-2% in certain categories. If the machine translation benchmark based on Wikipedia is influenced by LLMs, the scores of the models may become inflated, and the comparative results among models might shift as well. Moreover, the effectiveness of RAG might decrease if the knowledge base becomes polluted by LLM-generated content. While LLMs have not yet fully changed Wikipedia's language and knowledge structures, we believe that our empirical findings signal the need for careful consideration of potential future risks.