Chatbots' False Information Rate Nearly Doubles in a Year

lefigaro.fr

A NewsGuard study reveals that the rate of false information disseminated by AI chatbots has almost doubled in one year, reaching 35% in August 2025, with significant variation among models.

French
France
Politics, Technology, AI, Misinformation, Disinformation, Chatbots, NewsGuard
NewsGuard, Anthropic, Google, Inflection, Microsoft
What is the main finding of the NewsGuard study on AI chatbots and false information?
The study found that the rate of false information produced by AI chatbots nearly doubled in a year, rising from 18% in August 2024 to 35% in August 2025. This indicates a significant decline in the reliability of these tools despite claims of improvement.
What are the implications of this trend for the future of AI chatbots and the fight against disinformation?
The increasing sophistication of disinformation campaigns, particularly those run by Russian networks that exploit less-monitored social media platforms, poses a significant challenge. The integration of real-time web data, while beneficial in some respects, requires robust mechanisms to filter out unreliable sources in order to curb the spread of misinformation.
How do different AI models compare in terms of false information dissemination, and what factors contribute to this variation?
Claude (10%) and Gemini (16.7%) performed best, while Pi (56.7%) and Perplexity (46.7%) had the highest false-information rates; ChatGPT produced false information at a 40% rate. Expanded access to real-time web data, while improving the timeliness of responses, also exposed the models to unreliable sources, driving up inaccuracies.

Cognitive Concepts

Framing Bias (2/5)

The article presents a balanced view by showing both the improvements and failures of AI chatbots in distinguishing true from false information. However, the headline could be read as slightly alarmist, since it foregrounds the rise in false information rather than overall performance. The inclusion of specific chatbot names and their respective error rates provides a factual basis for the claims.

Language Bias (1/5)

The language used is generally neutral and objective. Terms like 'fake news' and 'disinformation' are used accurately, but words such as 'failed' could be replaced with more neutral alternatives like 'inaccurate' or 'incorrect'.

Bias by Omission (3/5)

While the article covers several major chatbots, it doesn't mention all existing models, potentially omitting perspectives from smaller or lesser-known AI developers. The focus is primarily on Western models, a selection that may itself introduce bias, and the article would benefit from acknowledging this limitation. It could also include a more detailed analysis of how the issue affects different populations and their trust in AI.

Sustainable Development Goals

Quality Education: Negative (Direct Relevance)

The article highlights the increasing spread of misinformation by AI chatbots, impacting the reliability of information used for education. The inability of these tools to distinguish between true and false information directly undermines the quality of information available for learning and research. The significant rise in false information disseminated by AI, from 18% to 35% in a year, poses a serious threat to informed decision-making and critical thinking skills, essential components of quality education.