Showing 1 to 12 of 136 results


Mixus.ai Combats AI Hallucinations with Human-in-the-Loop Verification
Mixus.ai, a startup, uses human-in-the-loop AI to verify content, addressing AI inaccuracies; it recently secured $2.6 million in pre-seed funding and shifted its business model from B2C to B2B.
56% Bias Score


Kremlin-Linked Fact-Checking Network Faces Criticism for Bias and Inaccuracy
In April 2025, Russia launched the Global Fact-Checking Network (GFCN), an initiative founded by the Kremlin-linked entities TASS and ANO Dialog Region; the GFCN faces criticism for its lack of transparency, biased reporting, and inaccurate data, contrasting sharply with established fact-checking standards.
40% Bias Score


Satellite Imagery Confirms Damage to Russian Bombers After Drone Attack
Analysis of satellite imagery confirms significant damage to Russian bomber aircraft at an air base in southern Russia following a Ukrainian drone attack, while other images helped verify the location of new aid distribution points in Gaza.
16% Bias Score


AI Chatbots Show High Error Rates, Raising Misinformation Concerns
Elon Musk's AI chatbot Grok and other AI tools such as ChatGPT, Gemini, and Copilot show high error rates in studies by the BBC and Columbia University, highlighting the risk of misinformation and the need for users to verify information against multiple sources.
40% Bias Score


AI-Generated Reading List Exposes Risks of Unchecked AI in Journalism
The Chicago Sun-Times and Philadelphia Inquirer published a summer reading list generated by AI that contained numerous fake books, highlighting the dangers of using AI without proper fact-checking and the need for human oversight in journalism.
52% Bias Score


Labour's NHS Appointment Target: Slower Growth Than Previous Year
Labour's claim of exceeding its NHS appointment target is challenged by new data showing slower growth than in the previous year, despite a significant reduction in the waiting list since the party took office.
40% Bias Score

Russia's GFCN: Kremlin-backed Fact-Checking or Disinformation Campaign?
Russia launched the Global Fact-Checking Network (GFCN), a Kremlin-aligned initiative criticized for its biased narratives, opaque operations, and questionable methodology, contrasting sharply with established fact-checking standards.
48% Bias Score

Kremlin-Backed Fact-Checking Network Spreads Disinformation
Russia launched the Global Fact-Checking Network (GFCN), a Kremlin-backed initiative using flawed methodology and biased narratives to counter Western fact-checkers; the GFCN's founders include TASS and ANPO "Dialog Regions," both sanctioned for spreading disinformation.
52% Bias Score

Kremlin-backed Fact-Checking Network Raises Concerns
Russia launched the Global Fact-Checking Network (GFCN), a Kremlin-backed initiative criticized for its biased narratives, questionable methodology, and lack of transparency, raising concerns about its impact on global information integrity.
56% Bias Score

AI-Generated Reading List Exposes Risks of Unchecked AI in Journalism
Two newspapers published an AI-generated summer reading list containing mostly fake books, demonstrating the risks of relying on AI without human fact-checking; the error, attributed to a writer's failure to verify ChatGPT's output, resulted in a published insert with inaccurate book recommendations, highlighting the need for human oversight in journalism.
48% Bias Score

Combating Misinformation: Why Businesses Must Embrace Journalistic Fact-Checking
The article emphasizes the critical need for businesses and thought leaders to adopt journalistic fact-checking practices to counter the spread of misinformation, stressing that accuracy and transparent corrections build trust and protect against reputational damage.
24% Bias Score

AI Chatbots' High Error Rate Raises Misinformation Concerns
A Columbia University study found that eight AI-powered search tools, including Elon Musk's Grok, frequently misidentified sources, highlighting the risk of misinformation spread through AI chatbots; Grok had a 94% error rate, while Perplexity had a 37% error rate.
52% Bias Score